[Xen-devel] [linux-next test] 127474: regressions - FAIL

2018-09-10 Thread osstest service owner
flight 127474 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/127474/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-rumprun-i386 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-qemuu-rhel6hvm-intel 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-freebsd10-i386 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-qemuu-win10-i386 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-qemut-ws16-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-qemuu-nested-intel 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-credit2 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-qemut-win7-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-qemuu-ws16-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-qemuu-ovmf-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-libvirt-xsm 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-i386-pvgrub 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-raw 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-qemuu-ovmf-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-pair 10 xen-boot/src_host fail REGR. vs. 127443
 test-amd64-i386-pair 11 xen-boot/dst_host fail REGR. vs. 127443
 test-amd64-i386-freebsd10-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-shadow 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-libvirt 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-pvhv2-amd 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-qemut-win10-i386 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-qemuu-nested-amd 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-shadow 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-xsm 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-qemut-rhel6hvm-amd 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-libvirt 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-libvirt-pair 10 xen-boot/src_host fail REGR. vs. 127443
 test-amd64-amd64-pair 10 xen-boot/src_host fail REGR. vs. 127443
 test-amd64-i386-xl-xsm 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-libvirt-pair 11 xen-boot/dst_host fail REGR. vs. 127443
 test-amd64-amd64-xl-pvshim 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-pair 11 xen-boot/dst_host fail REGR. vs. 127443
 test-amd64-i386-libvirt-xsm 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-qemut-rhel6hvm-intel 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-libvirt-pair 10 xen-boot/src_host fail REGR. vs. 127443
 test-amd64-i386-libvirt-pair 11 xen-boot/dst_host fail REGR. vs. 127443
 test-amd64-amd64-xl-qemut-debianhvm-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-qemuu-win10-i386 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-rumprun-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-qemuu-win7-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-qemuu-ws16-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-qemut-win7-amd64 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-amd64-pvgrub 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-pvshim 7 xen-boot fail REGR. vs. 127443
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 127443
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 127443
 

[Xen-devel] [ovmf test] 127483: all pass - PUSHED

2018-09-10 Thread osstest service owner
flight 127483 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/127483/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 8e2018f944ed18400f468fd9380284d665535481
baseline version:
 ovmf f4eaaf1a6d50c761e2af9a6dd0976fb8a3bd3c08

Last test of basis   127470  2018-09-10 07:43:57 Z    0 days
Testing same since   127483  2018-09-10 20:41:45 Z    0 days    1 attempts


People who touched revisions under test:
  Jian J Wang 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f4eaaf1a6d..8e2018f944  8e2018f944ed18400f468fd9380284d665535481 -> xen-tested-master


[Xen-devel] [linux-linus bisection] complete test-amd64-amd64-qemuu-nested-amd

2018-09-10 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-qemuu-nested-amd
testid debian-hvm-install

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
  Bug introduced:  93065ac753e4443840a057bfef4be71ec766fde9
  Bug not present: c2343d2761f86ae1b857f78c7cdb9f51e5fa1641
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/127488/


  commit 93065ac753e4443840a057bfef4be71ec766fde9
  Author: Michal Hocko 
  Date:   Tue Aug 21 21:52:33 2018 -0700
  
  mm, oom: distinguish blockable mode for mmu notifiers
  
  There are several blockable mmu notifiers which might sleep in
  mmu_notifier_invalidate_range_start and that is a problem for the
  oom_reaper because it needs to guarantee a forward progress so it cannot
  depend on any sleepable locks.
  
  Currently we simply back off and mark an oom victim with blockable mmu
  notifiers as done after a short sleep.  That can result in selecting a new
  oom victim prematurely because the previous one still hasn't torn its
  memory down yet.
  
  We can do much better though.  Even if mmu notifiers use sleepable locks
  there is no reason to automatically assume those locks are held.  Moreover
  majority of notifiers only care about a portion of the address space and
  there is absolutely zero reason to fail when we are unmapping an unrelated
  range.  Many notifiers do really block and wait for HW which is harder to
  handle and we have to bail out though.
  
  This patch handles the low hanging fruit.
  __mmu_notifier_invalidate_range_start gets a blockable flag and callbacks
  are not allowed to sleep if the flag is set to false.  This is achieved by
  using trylock instead of the sleepable lock for most callbacks and
  continue as long as we do not block down the call chain.
  
  I think we can improve that even further because there is a common pattern
  to do a range lookup first and then do something about that.  The first
  part can be done without a sleeping lock in most cases AFAICS.
  
  The oom_reaper end then simply retries if there is at least one notifier
  which couldn't make any progress in !blockable mode.  A retry loop is
  already implemented to wait for the mmap_sem and this is basically the
  same thing.
  
  The simplest way for driver developers to test this code path is to wrap
  userspace code which uses these notifiers into a memcg and set the hard
  limit to hit the oom.  This can be done e.g.  after the test faults in all
  the mmu notifier managed memory and set the hard limit to something really
  small.  Then we are looking for a proper process tear down.
  
  [a...@linux-foundation.org: coding style fixes]
  [a...@linux-foundation.org: minor code simplification]
  Link: http://lkml.kernel.org/r/20180716115058.5559-1-mho...@kernel.org
  Signed-off-by: Michal Hocko 
  Acked-by: Christian König  # AMD notifiers
  Acked-by: Leon Romanovsky  # mlx and umem_odp
  Reported-by: David Rientjes 
  Cc: "David (ChunMing) Zhou" 
  Cc: Paolo Bonzini 
  Cc: Alex Deucher 
  Cc: David Airlie 
  Cc: Jani Nikula 
  Cc: Joonas Lahtinen 
  Cc: Rodrigo Vivi 
  Cc: Doug Ledford 
  Cc: Jason Gunthorpe 
  Cc: Mike Marciniszyn 
  Cc: Dennis Dalessandro 
  Cc: Sudeep Dutt 
  Cc: Ashutosh Dixit 
  Cc: Dimitri Sivanich 
  Cc: Boris Ostrovsky 
  Cc: Juergen Gross 
  Cc: "Jérôme Glisse" 
  Cc: Andrea Arcangeli 
  Cc: Felix Kuehling 
  Signed-off-by: Andrew Morton 
  Signed-off-by: Linus Torvalds 
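
The control-flow change described above is compact enough to sketch. A minimal userspace rendering of the pattern, using a pthread mutex as a stand-in for a notifier's sleepable range lock (an illustration, not the kernel code itself):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t range_lock = PTHREAD_MUTEX_INITIALIZER;

    /*
     * blockable == true:  ordinary caller, sleeping on the lock is fine.
     * blockable == false: oom_reaper path, must not sleep; try the lock
     * and report -EAGAIN so the caller's existing retry loop kicks in.
     */
    static int invalidate_range_start(bool blockable)
    {
        if (blockable)
            pthread_mutex_lock(&range_lock);
        else if (pthread_mutex_trylock(&range_lock) != 0)
            return -EAGAIN;

        /* ... tear down mappings for the affected range ... */

        pthread_mutex_unlock(&range_lock);
        return 0;
    }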


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-linus/test-amd64-amd64-qemuu-nested-amd.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-linus/test-amd64-amd64-qemuu-nested-amd.debian-hvm-install --summary-out=tmp/127488.bisection-summary --basis-template=125898 --blessings=real,real-bisect linux-linus test-amd64-amd64-qemuu-nested-amd debian-hvm-install
Searching for failure / basis pass:
 127458 fail [host=pinot1] / 126310 ok.
Failure / basis pass flights: 127458 / 126310
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git

[Xen-devel] [ovmf baseline-only test] 75193: trouble: blocked/broken

2018-09-10 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 75193 ovmf real [real]
http://osstest.xensource.com/osstest/logs/75193/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm  broken
 build-i386   broken
 build-amd64-pvopsbroken
 build-i386-xsm   broken
 build-amd64  broken
 build-i386-pvops broken

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 build-i3864 host-install(4)   broken baseline untested
 build-i386-pvops  4 host-install(4)   broken baseline untested
 build-i386-xsm4 host-install(4)   broken baseline untested
 build-amd64-pvops 4 host-install(4)   broken baseline untested
 build-amd64-xsm   4 host-install(4)   broken baseline untested
 build-amd64   4 host-install(4)   broken baseline untested

version targeted for testing:
 ovmf f4eaaf1a6d50c761e2af9a6dd0976fb8a3bd3c08
baseline version:
 ovmf 4b2dc555d8a67e715d8fafab4c9131791d31a788

Last test of basis    75190  2018-09-10 07:50:14 Z    0 days
Testing same since    75193  2018-09-10 20:20:12 Z    0 days    1 attempts


People who touched revisions under test:
  Jian J Wang 
  Ruiyu Ni 
  Zhang Chao B 
  Zhang, Chao B 

jobs:
 build-amd64-xsm  broken  
 build-i386-xsm   broken  
 build-amd64  broken  
 build-i386   broken  
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopsbroken  
 build-i386-pvops broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xensource.com/osstest/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

broken-job build-amd64-xsm broken
broken-job build-i386 broken
broken-job build-amd64-pvops broken
broken-job build-i386-xsm broken
broken-job build-amd64 broken
broken-job build-i386-pvops broken
broken-step build-i386 host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64 host-install(4)

Push not applicable.


commit f4eaaf1a6d50c761e2af9a6dd0976fb8a3bd3c08
Author: Ruiyu Ni 
Date:   Fri Aug 31 16:55:36 2018 +0800

Emulator/Win: Fix build failure using VS2015x86 or old WinSDK

When built with a WinSDK <= Win10 TH2, the terminal over CMD.exe
doesn't work, because support for VT terminal sequences only arrived
in Win10 releases later than TH2.

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni 
Cc: Michael D Kinney 
Reviewed-by: Hao A Wu 

commit 289cb872edc2b826534b3ff634d25f2430bf87d5
Author: Ruiyu Ni 
Date:   Fri Aug 31 11:35:58 2018 +0800

EmulatorPkg: Update package level Readme.md

Since the emulator under Windows is enabled, the patch changes
README to include the information about the emulator under Windows.
It also renames README to Readme.md for better rendering.

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni 
Cc: Michael D Kinney 
Reviewed-by: Hao A Wu 

commit 34c3405cb74c22a7d81b5aee65f0fc2a45c8dfae
Author: Ruiyu Ni 
Date:   Fri Sep 7 18:12:46 2018 +0800

UefiCpuPkg/PeiCpuException: Fix coding style issue

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni 
Reviewed-by: Dandan Bi 

commit 3a3475331275bca190718557247a37b27b083a2a
Author: Zhang, Chao B 
Date:   Fri Sep 7 16:35:09 2018 +0800

SecurityPkg: HashLib: Change dos format

Change file format to DOS

Cc: Bi Dandan 

[Xen-devel] [linux-3.18 test] 127472: regressions - FAIL

2018-09-10 Thread osstest service owner
flight 127472 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/127472/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail REGR. vs. 127296

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt 16 guest-start/debian.repeat  fail pass in 127455

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-examine  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 127296
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 127296
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 127296
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 127296
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 127296
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 127296
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 build-arm64-pvops 6 kernel-build fail never pass
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim 12 guest-start fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux  c0305995d3676c8f7764eb79a7f99de8d18c591a
baseline version:
 linux  ba6984fc0162f24a510ebc34e881b546b69c553b

Last test of basis   127296  2018-09-05 07:45:19 Z    5 days
Testing same since   127455  2018-09-09 18:12:27 Z    1 days    2 attempts


People who touched revisions under test:
  "Eric W. Biederman" 
  Al Viro 
  Alexandru Ardelean 
  Andrew Morton 
  Anna Schumaker 
  Bartlomiej Zolnierkiewicz 
  Bartosz Golaszewski 
  

Re: [Xen-devel] v4.19-rc3, bug in __gnttab_unmap_refs_async with HVM domU

2018-09-10 Thread Dongli Zhang
The backtrace of the BUG is quite similar to a hang I encountered:

https://lists.xenproject.org/archives/html/xen-devel/2018-09/msg00454.html

Not sure if they are related.

Dongli Zhang

On 09/10/2018 08:37 PM, Olaf Hering wrote:
> While preparing another variant of the fix for the bug in
> disable_hotplug_cpu, this crash happened for me while starting my HVM domU a
> second time. dom0 runs Xen 4.7.6.
> I guess it crashed while it shut down the domU running a xenlinux-based
> kernel.
> 
> Olaf
> 
> [ 8114.320383] BUG: unable to handle kernel NULL pointer dereference at 
> 0008
> [ 8114.320416] PGD 1fd6a1f067 P4D 1fd6a1f067 PUD 1fd4b4a067 PMD 0
> [ 8114.320427] Oops:  [#1] PREEMPT SMP NOPTI
> [ 8114.320435] CPU: 0 PID: 828 Comm: xenstored Tainted: GE 
> 4.19.321-default-bug1106594 #5
> [ 8114.320444] Hardware name: HP ProLiant SL160z G6 /ProLiant SL160z G6 , 
> BIOS O33 07/28/2009
> [ 8114.320458] RIP: e030:__gnttab_unmap_refs_async+0x29/0x90
> [ 8114.320464] Code: 00 66 66 66 66 90 53 8b 8f 80 00 00 00 31 c0 48 89 fb 48 
> 8b 57 78 85 c9 75 09 eb 49 83 c0 01 39 c8 74 42 4c 63 c0 4e 8b 04 c2 <4d> 8b 
> 48 08 41 f6 c1 01 75 4d 45 8b 40 34
>  41 83 f8 01 7e de 8b 83
> [ 8114.320480] RSP: e02b:c900471d3bd8 EFLAGS: 00010297
> [ 8114.320487] RAX: 0001 RBX: c900471d3c20 RCX: 
> 006c
> [ 8114.320495] RDX: 881fd9f3eac0 RSI: 810ad2f0 RDI: 
> c900471d3c20
> [ 8114.320503] RBP: 02ccbdb0 R08:  R09: 
> dead0100
> [ 8114.320511] R10: 1093 R11: 881fd3340840 R12: 
> 880101609d80
> [ 8114.320518] R13: 006c R14: 881fd68dbb01 R15: 
> 880101609d80
> [ 8114.320533] FS:  7fd3352a3880() GS:881fdf40() 
> knlGS:
> [ 8114.320541] CS:  e033 DS:  ES:  CR0: 80050033
> [ 8114.320548] CR2: 0008 CR3: 001fd33ca000 CR4: 
> 2660
> [ 8114.320560] Call Trace:
> [ 8114.320569]  gnttab_unmap_refs_sync+0x40/0x60
> [ 8114.320580]  __unmap_grant_pages+0x80/0x140 [xen_gntdev]
> [ 8114.320587]  ? gnttab_unmap_refs_sync+0x60/0x60
> [ 8114.320596]  ? __queue_work+0x3f0/0x3f0
> [ 8114.320602]  ? gnttab_free_pages+0x20/0x20
> [ 8114.320610]  unmap_grant_pages+0x80/0xe0 [xen_gntdev]
> [ 8114.320618]  unmap_if_in_range+0x53/0xa0 [xen_gntdev]
> [ 8114.320626]  mn_invl_range_start+0x4a/0xe0 [xen_gntdev]
> [ 8114.320635]  __mmu_notifier_invalidate_range_start+0x6b/0xe0
> [ 8114.320646]  unmap_vmas+0x71/0x90
> [ 8114.320652]  unmap_region+0x9c/0xf0
> [ 8114.320660]  ? __vma_rb_erase+0x109/0x200
> [ 8114.320666]  do_munmap+0x213/0x390
> [ 8114.320673]  __x64_sys_brk+0x13c/0x1b0
> [ 8114.320682]  do_syscall_64+0x5d/0x110
> [ 8114.320690]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
> 
> 
> 


Re: [Xen-devel] [PATCH] SVM: limit GIF=0 region

2018-09-10 Thread Boris Ostrovsky
On 09/10/2018 11:02 AM, Jan Beulich wrote:
> Use EFLAGS.IF for most ordinary purposes; there's in particular no need
> to unduly defer NMI/#MC. Clear GIF only immediately before VMRUN itself.
> This has the additional advantage that svm_stgi_label now indeed marks
> the only place where GIF gets set.
>
> Note regarding the main STI placement: Quite counterintuitively the
> host's EFLAGS.IF continues to have a meaning while the guest runs; see
> PM Vol 2 section "Physical (INTR) Interrupt Masking in EFLAGS". Hence we
> need to set the flag for the duration of time being in guest context.
> However, SPEC_CTRL_ENTRY_FROM_HVM wants to be carried out with EFLAGS.IF
> clear.
>
> Note regarding the main STGI placement: It could be moved further up,
> but at present SPEC_CTRL_EXIT_TO_HVM is not NMI/#MC-safe.
>
> Suggested-by: Andrew Cooper 
> Signed-off-by: Jan Beulich 

Reviewed-by: Boris Ostrovsky 




Re: [Xen-devel] [PATCH v2] x86: use VMLOAD for PV context switch

2018-09-10 Thread Boris Ostrovsky
On 09/10/2018 10:03 AM, Jan Beulich wrote:
> Having noticed that VMLOAD alone is about as fast as a single of the
> involved WRMSRs, I thought it might be a reasonable idea to also use it
> for PV. Measurements, however, have shown that an actual improvement can
> be achieved only with an early prefetch of the VMCB (thanks to Andrew
> for suggesting to try this), which I have to admit I can't really
> explain. This way on my Fam15 box context switch takes over 100 clocks
> less on average (the measured values are heavily varying in all cases,
> though).
>
> This is intentionally not using a new hvm_funcs hook: For one, this is
> all about PV, and something similar can hardly be done for VMX.
> Furthermore the indirect to direct call patching that is meant to be
> applied to most hvm_funcs hooks would be ugly to make work with
> functions having more than 6 parameters.
>
> Signed-off-by: Jan Beulich 
> Acked-by: Brian Woods 
> ---
> v2: Re-base.
> ---
> Besides the mentioned oddity with measured performance, I've also
> noticed a significant difference (of at least 150 clocks) between
> measuring immediately around the calls to svm_load_segs() and measuring
> immediately inside the function.
>


>  
> +#ifdef CONFIG_PV
> +bool svm_load_segs(unsigned int ldt_ents, unsigned long ldt_base,
> +   unsigned int fs_sel, unsigned long fs_base,
> +   unsigned int gs_sel, unsigned long gs_base,
> +   unsigned long gs_shadow)
> +{
> +unsigned int cpu = smp_processor_id();
> +struct vmcb_struct *vmcb = per_cpu(host_vmcb_va, cpu);
> +
> +if ( unlikely(!vmcb) )
> +return false;
> +
> +if ( !ldt_base )
> +{
> +asm volatile ( "prefetch %0" :: "m" (vmcb->ldtr) );
> +return true;


Could you explain why this is true? We haven't loaded FS/GS here.

I also couldn't find discussion about prefetch --- why is prefetching
ldtr expected to help?

-boris
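
A mundane contributor to the ~150-clock gap Jan mentions between the two measurement points is that measuring around the call includes the call/return and argument setup, while measuring inside the function does not. A self-contained userspace sketch of the "around the call" variant using the RDTSCP intrinsic; the stub below is a placeholder, not Xen's svm_load_segs():

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>

    /* Placeholder for the function under test. */
    static void stub(void) { __asm__ volatile ("" ::: "memory"); }

    int main(void)
    {
        unsigned int aux;
        uint64_t t0, t1;

        t0 = __rdtscp(&aux);   /* won't execute ahead of earlier insns */
        stub();                /* call/return overhead lands in t1 - t0 */
        t1 = __rdtscp(&aux);

        printf("around the call: %llu clocks\n",
               (unsigned long long)(t1 - t0));
        return 0;
    }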


Re: [Xen-devel] L1TF, and future work

2018-09-10 Thread Tamas K Lengyel
On Fri, Aug 24, 2018 at 3:16 AM Dario Faggioli  wrote:
>
> On Wed, 2018-08-15 at 14:17 +0100, Andrew Cooper wrote:
> > Hello,
> >
> > Now that the embargo on XSA-273 is up, we can start publicly
> > discussing
> > the remaining work, because there is plenty to do.  In no
> > particular
> > order...
> >
> >
> > [...]
> >
> > 5) Core-aware scheduling.  At the moment, Xen will schedule arbitrary
> > guest vcpus on arbitrary hyperthreads.  This is bad and wants
> > fixing.
> > I'll defer to Dario for further details.
> >
> Yes. So, basically, making sure that, if we have hyperthreading, only
> vCPUs from one domain are, at any given time, concurrently running on
> the threads of a core, acts as a form of mitigation.
>
> As a reference, check how this is mentioned in L1TF writeups coming
> from other hypervisor's that have (or are introducing) support for this
> already:
>
> Hyper-V:
> https://support.microsoft.com/en-us/help/4457951/windows-server-guidance-to-protect-against-l1-terminal-fault
>
> VMWare:
> https://kb.vmware.com/s/article/55806
>
> (MS' Hyper-V's core-scheduler is also mentioned in one of the Intel's
> documents
> https://www.intel.com/content/www/us/en/architecture-and-technology/l1tf.html
>  )
>
> It's not a *complete* mitigation, and, e.g., the other measures (like
> the L1D flushing on VMEnter) are still required, but it helps
> prevent a VM from reading/stealing data from another VM.
>
> As an example, if we have VM 1 and VM 2, with four vCPUs each, and a
> two core system with hyperthreading, i.e., cpu 0 and cpu 1 are threads
> of core 0, while cpu 2 and cpu 3 are threads of core 1, we want to
> schedule the vCPUs, for instance, like this:
>
> cpu0 <-- d2v3
> cpu1 <-- d2v1
> cpu2 <-- d1v2
> cpu3 <-- d1v0
>
> and not like this:
>
> cpu0 <-- d1v2
> cpu1 <-- d2v3
> ...
>
> Of course, this means that, if only d1v2, from VM 1, is active and
> wants to run, while all four vCPUs of VM 2 are active and want to
> run too, we can end up in this situation:
>
> cpu0 <-- d1v2
> cpu1 <-- _idle_
> cpu2 <-- d2v1
> cpu3 <-- d2v3
>
> wanting_to_run: d2v0, d2v2
>
> I.e., there are ready to run vCPUs, there is an idle pCPU, but we can't
> run them there. This is not ideal, but is, at least in theory, better
> than disabling hyperthreading entirely. (Again, these are all just
> examples!)
>
> Of course, this makes the scheduling much more complicated, especially
> when it comes to fairness considerations and to avoiding starvation.
>
> I do have an RFC level patch series, for starting implementing this
> "core-scheduling", which I have shared with someone, during the
> embargo, and that I will post here on xen-devel later.
>
> Note that I'll be off for ~2 weeks, effective next Monday, so feel free
> to comment, reply, etc, but expect me to reply back only in September.

Hi Dario,
once you are back from vacation, could you share the RFC patches you mentioned?

Thanks,
Tamas
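
The placement rule Dario sketches reduces to a single predicate per hyperthread pair. A minimal sketch with domains reduced to integers; the names are illustrative, not Xen's:

    #include <stdbool.h>

    #define IDLE_DOM (-1)   /* marker: the sibling thread runs nothing */

    /*
     * A vCPU of domain 'dom' may be placed on a hyperthread only if the
     * sibling thread is idle or already runs the same domain. In the
     * example above, d1v2 on cpu0 leaves cpu1 usable only by VM 1, so
     * cpu1 stays idle even while d2v0 and d2v2 are ready to run.
     */
    static bool core_sched_may_place(int dom, int sibling_dom)
    {
        return sibling_dom == IDLE_DOM || sibling_dom == dom;
    }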


Re: [Xen-devel] Updated key wiki documentation

2018-09-10 Thread Marek Marczykowski-Górecki
On Mon, Sep 10, 2018 at 07:04:49PM +0100, Lars Kurth wrote:
> See
> https://wiki.xenproject.org/wiki/Xen_Project_Software_Overview_v2 
> and https://wiki.xenproject.org/wiki/Booting_Overview
> I'd appreciate a look and feedback. 

On Booting_Overview, the diagram for pvgrub is wrong - the kernel+initramfs
comes from the domU filesystem.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?



[Xen-devel] [ovmf test] 127470: all pass - PUSHED

2018-09-10 Thread osstest service owner
flight 127470 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/127470/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf f4eaaf1a6d50c761e2af9a6dd0976fb8a3bd3c08
baseline version:
 ovmf 4b2dc555d8a67e715d8fafab4c9131791d31a788

Last test of basis   127461  2018-09-10 00:40:52 Z    0 days
Testing same since   127470  2018-09-10 07:43:57 Z    0 days    1 attempts


People who touched revisions under test:
  Jian J Wang 
  Ruiyu Ni 
  Zhang Chao B 
  Zhang, Chao B 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   4b2dc555d8..f4eaaf1a6d  f4eaaf1a6d50c761e2af9a6dd0976fb8a3bd3c08 -> xen-tested-master


[Xen-devel] [freebsd-master test] 127475: all pass - PUSHED

2018-09-10 Thread osstest service owner
flight 127475 freebsd-master real [real]
http://logs.test-lab.xenproject.org/osstest/logs/127475/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 freebsd  4e22ee3754200e9ce86c4820d484dfeb94041c56
baseline version:
 freebsd  7b6d891a9f0e73952e081c2c41469cdeba34eea8

Last test of basis   127373  2018-09-07 09:18:57 Z    3 days
Testing same since   127475  2018-09-10 09:19:51 Z    0 days    1 attempts


People who touched revisions under test:
  delphij 
  emaste 
  hselasky 
  kib 
  markj 
  rgrimes 
  tsoome 

jobs:
 build-amd64-freebsd-againpass
 build-amd64-freebsd  pass
 build-amd64-xen-freebsd  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/freebsd.git
   7b6d891a9f0..4e22ee37542  4e22ee3754200e9ce86c4820d484dfeb94041c56 -> tested/master


[Xen-devel] [linux-4.14 test] 127464: tolerable FAIL - PUSHED

2018-09-10 Thread osstest service owner
flight 127464 linux-4.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/127464/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail in 127453 pass in 127464
 test-amd64-amd64-examine  4 memdisk-try-append fail pass in 127453

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim 12 guest-start fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl 13 migrate-support-check fail never pass
 test-arm64-arm64-xl 14 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-arm64-arm64-xl-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux  7fe7a0f4c5cf9e7f5b7cb67c1341cdbf62ed4c30
baseline version:
 linux  ee13f7edca5838436feefde90ed1b2ebb07c4184

Last test of basis   127297  2018-09-05 07:46:02 Z    5 days
Testing same since   127453  2018-09-09 18:11:08 Z    1 days    2 attempts


People who touched revisions under test:
  Aditya Kali 
  Adrian Hunter 
  Alexander Aring 
  Alexandre Belloni 
  Alexandru Ardelean 
  Amir Goldstein 
  Andrew Donnellan 
  Andrew Morton 
  Anna Schumaker 
  Arnaldo Carvalho de Melo 
  Bart Van Assche 
  Bartlomiej Zolnierkiewicz 
  Bartosz Golaszewski 
  Benjamin Herrenschmidt 
  Bill Baker 
  Bjorn Helgaas 
  Chanwoo Choi 
  Chirantan Ekbote 
  Chris Wilson 
  

Re: [Xen-devel] [PATCH v2 10/13] optee: add support for RPC commands

2018-09-10 Thread Volodymyr Babchuk

Hi,

On 10.09.18 18:34, Julien Grall wrote:

Hi Volodymyr,

On 03/09/18 17:54, Volodymyr Babchuk wrote:

OP-TEE can issue multiple RPC requests. We are interested mostly in
request that asks NW to allocate/free shared memory for OP-TEE
needs, becuase mediator need to do address translation in the same


s/becuase/because/
s/need/needs/

the mediator



way as it was done for shared buffers registered by NW.

As mediator now accesses shared command buffer, we need to shadow


"As the"


it in the same way, as we shadow request buffers for STD calls.

Signed-off-by: Volodymyr Babchuk 
---
  xen/arch/arm/tee/optee.c | 137 +++

  1 file changed, 126 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 8bfcfdc..b2d795e 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -45,6 +45,7 @@ struct std_call_ctx {
  struct shm_rpc {
  struct list_head list;
  struct optee_msg_arg *guest_arg;
+    struct optee_msg_arg *xen_arg;
  struct page *guest_page;
  mfn_t guest_mfn;
  uint64_t cookie;
@@ -303,6 +304,10 @@ static struct shm_rpc *allocate_and_map_shm_rpc(struct domain_ctx *ctx, paddr_t

  if ( !shm_rpc )
  goto err;
+    shm_rpc->xen_arg = alloc_xenheap_page();
+    if ( !shm_rpc->xen_arg )
+    goto err;
+
  shm_rpc->guest_mfn = lookup_and_pin_guest_ram_addr(gaddr, NULL);
  if ( mfn_eq(shm_rpc->guest_mfn, INVALID_MFN) )
@@ -324,6 +329,10 @@ static struct shm_rpc *allocate_and_map_shm_rpc(struct domain_ctx *ctx, paddr_t

  err:
   atomic_dec(&ctx->shm_rpc_count);
+
+    if ( shm_rpc->xen_arg )
+    free_xenheap_page(shm_rpc->xen_arg);
+
  xfree(shm_rpc);
  return NULL;
  }
@@ -346,9 +355,10 @@ static void free_shm_rpc(struct domain_ctx *ctx, uint64_t cookie)

  }
   spin_unlock(&ctx->lock);
-    if ( !found ) {
+    if ( !found )
  return;
-    }


That change should be folded into the patch adding the {}.


+
+    free_xenheap_page(shm_rpc->xen_arg);
  if ( shm_rpc->guest_arg ) {
  unpin_guest_ram_addr(shm_rpc->guest_mfn);
@@ -358,6 +368,24 @@ static void free_shm_rpc(struct domain_ctx *ctx, uint64_t cookie)

  xfree(shm_rpc);
  }
+static struct shm_rpc *find_shm_rpc(struct domain_ctx *ctx, uint64_t cookie)

+{
+    struct shm_rpc *shm_rpc;
+
+    spin_lock(&ctx->lock);
+    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
+    {
+    if ( shm_rpc->cookie == cookie )
+    {
+    spin_unlock(&ctx->lock);
+    return shm_rpc;
+    }
+    }
+    spin_unlock(&ctx->lock);
+
+    return NULL;
+}
+
  static struct shm_buf *allocate_shm_buf(struct domain_ctx *ctx,
  uint64_t cookie,
  int pages_cnt)
@@ -704,6 +732,28 @@ static bool copy_std_request_back(struct domain_ctx *ctx,

  return true;
  }
+static void handle_rpc_return(struct domain_ctx *ctx,
+ struct cpu_user_regs *regs,
+ struct std_call_ctx *call)
+{
+    call->optee_thread_id = get_user_reg(regs, 3);
+    call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(get_user_reg(regs, 0));
+
+    if ( call->rpc_op == OPTEE_SMC_RPC_FUNC_CMD )
+    {
+    /* Copy RPC request from shadowed buffer to guest */
+    uint64_t cookie = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);

+    struct shm_rpc *shm_rpc = find_shm_rpc(ctx, cookie);


Newline between declaration and code.

Sorry, another habit from kernel coding style :(


+    if ( !shm_rpc )
+    {
+    gprintk(XENLOG_ERR, "Can't find SHM-RPC with cookie %lx\n", cookie);

+    return;
+    }
+    memcpy(shm_rpc->guest_arg, shm_rpc->xen_arg,
+   OPTEE_MSG_GET_ARG_SIZE(shm_rpc->xen_arg->num_params));
+    }
+}
+
  static bool execute_std_call(struct domain_ctx *ctx,
   struct cpu_user_regs *regs,
   struct std_call_ctx *call)
@@ -715,8 +765,7 @@ static bool execute_std_call(struct domain_ctx *ctx,
  optee_ret = get_user_reg(regs, 0);
  if ( OPTEE_SMC_RETURN_IS_RPC(optee_ret) )
  {
-    call->optee_thread_id = get_user_reg(regs, 3);
-    call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(optee_ret);
+    handle_rpc_return(ctx, regs, call);


It would make sense to introduce handle_rpc_return where you actually 
add those 2 lines.



  return true;
  }
@@ -783,6 +832,74 @@ out:
  return ret;
  }
+
+static void handle_rpc_cmd_alloc(struct domain_ctx *ctx,
+ struct cpu_user_regs *regs,
+ struct std_call_ctx *call,
+ struct shm_rpc *shm_rpc)
+{
+    if ( shm_rpc->xen_arg->params[0].attr != (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
+                                              OPTEE_MSG_ATTR_NONCONTIG) )
+    {
+    
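
The shadowing discipline this patch extends from request buffers to RPC buffers fits in a few lines of plain C. A minimal sketch with a made-up message type, not OP-TEE's optee_msg_arg:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct msg { uint32_t num_params; uint64_t params[4]; };

    /* 'guest_arg' is writable by the guest at any time. */
    static int handle_call(struct msg *guest_arg)
    {
        struct msg *shadow = malloc(sizeof(*shadow));

        if (!shadow)
            return -1;

        memcpy(shadow, guest_arg, sizeof(*shadow));   /* snapshot once */
        /* Validate and translate 'shadow' only, and hand 'shadow' (never
         * 'guest_arg') to the firmware: later guest writes can no longer
         * affect what was validated. */
        memcpy(guest_arg, shadow, sizeof(*shadow));   /* publish results */

        free(shadow);
        return 0;
    }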

[Xen-devel] Updated key wiki documentation

2018-09-10 Thread Lars Kurth
See
https://wiki.xenproject.org/wiki/Xen_Project_Software_Overview_v2 
and https://wiki.xenproject.org/wiki/Booting_Overview
I'd appreciate a look and feedback. Once in order I will replace
https://wiki.xenproject.org/wiki/Xen_Project_Software_Overview
with https://wiki.xenproject.org/wiki/Xen_Project_Software_Overview_v2 
Lars 


Re: [Xen-devel] [PATCH v2 09/13] optee: add support for arbitrary shared memory

2018-09-10 Thread Volodymyr Babchuk

Hi,

On 10.09.18 17:02, Julien Grall wrote:

Hi,

On 03/09/18 17:54, Volodymyr Babchuk wrote:

Shared memory is widely used by NW to communicate with
TAs in OP-TEE. NW can share part of own memory with
TA or OP-TEE core, by registering it OP-TEE, or by providing
a temporal refernce. Anyways, information about such memory
buffers are sent to OP-TEE as a list of pages. This mechanism
is descripted optee_msg.h.

Mediator should step in when NW tries to share memory with
OP-TEE for two reasons:

1. Do address translation from IPA to PA.
2. Pin domain pages till they are mapped into OP-TEE or TA
    address space, so domain can't transfer this pages to
    other domain or baloon out them.


s/baloon/balloon/



Address translation is done by translate_noncontig(...) function.
It allocates new buffer from xenheap and then walks on guest
provided list of pages, translates addresses and stores PAs into
newly allocated buffer. This buffer will be provided to OP-TEE
instead of original buffer from the guest. This buffer will
be free at the end of sdandard call.

In the same time this function pins pages and stores them in
struct shm_buf object. This object will live all the time,
when given SHM buffer is known to OP-TEE. It will be freed
after guest unregisters shared buffer. At this time pages
will be unpinned.

Signed-off-by: Volodymyr Babchuk 
---
  xen/arch/arm/tee/optee.c | 245 ++-

  1 file changed, 244 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 6d6b51d..8bfcfdc 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -22,6 +22,8 @@
  #define MAX_STD_CALLS   16
  #define MAX_RPC_SHMS    16
+#define MAX_TOTAL_SMH_BUF_PG    16384


So that's 64MB worth of guest memory. Do we expect them to be mapped in 
Xen? Or just pinned?

Just pinned. We are not interested in the contents of this memory.


+#define MAX_NONCONTIG_ENTRIES   5
  /*
   * Call context. OP-TEE can issue multiple RPC returns during one call.
@@ -31,6 +33,9 @@ struct std_call_ctx {
  struct list_head list;
  struct optee_msg_arg *guest_arg;
  struct optee_msg_arg *xen_arg;
+    /* Buffer for translated page addresses, shared with OP-TEE */
+    void *non_contig[MAX_NONCONTIG_ENTRIES];
+    int non_contig_order[MAX_NONCONTIG_ENTRIES];


Can you please introduce a structure with the order and mapping?


  mfn_t guest_arg_mfn;
  int optee_thread_id;
  int rpc_op;
@@ -45,13 +50,24 @@ struct shm_rpc {
  uint64_t cookie;
  };
+/* Shared memory buffer for arbitrary data */
+struct shm_buf {
+    struct list_head list;
+    uint64_t cookie;
+    int max_page_cnt;
+    int page_cnt;


AFAICT, max_page_cnt and page_cnt should never but negative. If so, then 
they should be unsigned.



+    struct page_info *pages[];
+};
+
  struct domain_ctx {
  struct list_head list;
  struct list_head call_ctx_list;
  struct list_head shm_rpc_list;
+    struct list_head shm_buf_list;
  struct domain *domain;
  atomic_t call_ctx_count;
  atomic_t shm_rpc_count;
+    atomic_t shm_buf_pages;
  spinlock_t lock;
  };
@@ -158,9 +174,12 @@ static int optee_enable(struct domain *d)
  ctx->domain = d;
   INIT_LIST_HEAD(&ctx->call_ctx_list);
   INIT_LIST_HEAD(&ctx->shm_rpc_list);
+    INIT_LIST_HEAD(&ctx->shm_buf_list);
   atomic_set(&ctx->call_ctx_count, 0);
   atomic_set(&ctx->shm_rpc_count, 0);
+    atomic_set(&ctx->shm_buf_pages, 0);
+
   spin_lock_init(&ctx->lock);
   spin_lock(&domain_ctx_list_lock);
@@ -339,12 +358,76 @@ static void free_shm_rpc(struct domain_ctx *ctx, uint64_t cookie)

  xfree(shm_rpc);
  }
+static struct shm_buf *allocate_shm_buf(struct domain_ctx *ctx,
+    uint64_t cookie,
+    int pages_cnt)


Ditto.


+{
+    struct shm_buf *shm_buf;
+
+    while(1)
+    {
+        int old = atomic_read(&ctx->shm_buf_pages);
+        int new = old + pages_cnt;
+        if ( new >= MAX_TOTAL_SMH_BUF_PG )
+            return NULL;
+        if ( likely(old == atomic_cmpxchg(&ctx->shm_buf_pages, old, new)) )
+            break;
+    }
+    }
+
+    shm_buf = xzalloc_bytes(sizeof(struct shm_buf) +
+    pages_cnt * sizeof(struct page *));
+    if ( !shm_buf ) {


Coding style:

if ( ... )
{


+    atomic_sub(pages_cnt, &ctx->shm_buf_pages);
+    return NULL;
+    }
+
+    shm_buf->cookie = cookie;
+    shm_buf->max_page_cnt = pages_cnt;
+
+    spin_lock(&ctx->lock);
+    list_add_tail(&shm_buf->list, &ctx->shm_buf_list);
+    spin_unlock(&ctx->lock);
+
+    return shm_buf;
+}
+
+static void free_shm_buf(struct domain_ctx *ctx, uint64_t cookie)
+{
+    struct shm_buf *shm_buf;
+    bool found = false;
+
+    spin_lock(&ctx->lock);
+    list_for_each_entry( shm_buf, &ctx->shm_buf_list, list )
+    {
+    if ( shm_buf->cookie == cookie )


What does guarantee you the cookie will be uniq?


+    {
+    found = true;
+    list_del(&shm_buf->list);
+
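
The reservation loop in allocate_shm_buf() above is a standard bounded-counter pattern: optimistically compute the new total and retry if another CPU raced in between the read and the compare-and-swap. A self-contained rendering in C11 atomics (Xen spells these atomic_read()/atomic_cmpxchg(); the names below are illustrative):

    #include <stdatomic.h>
    #include <stdbool.h>

    #define MAX_TOTAL_PAGES 16384   /* mirrors MAX_TOTAL_SMH_BUF_PG */

    static atomic_int shm_pages;

    /*
     * Charge 'cnt' pages against a global cap. On failure nothing is
     * charged; on any later error the caller must give the pages back
     * with atomic_fetch_sub(), mirroring the atomic_sub() in the error
     * path above.
     */
    static bool reserve_pages(int cnt)
    {
        int old = atomic_load(&shm_pages);

        do {
            if (old + cnt > MAX_TOTAL_PAGES)
                return false;
        } while (!atomic_compare_exchange_weak(&shm_pages, &old,
                                               old + cnt));

        return true;
    }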

Re: [Xen-devel] [PATCH v2] xen: add DEBUG_INFO Kconfig symbol

2018-09-10 Thread Andrew Cooper
On 10/09/18 14:29, Jan Beulich wrote:
 On 10.09.18 at 15:21,  wrote:
> On 31.08.18 at 10:43,  wrote:
>> On 31.08.18 at 10:29,  wrote:
 --- a/xen/Kconfig.debug
 +++ b/xen/Kconfig.debug
 @@ -11,6 +11,13 @@ config DEBUG
  
  You probably want to say 'N' here.
  
 +config DEBUG_INFO
 +  bool "Compile Xen with debug info"
 +  default y
 +  ---help---
 +If you say Y here the resulting Xen will include debugging info
 +resulting in a larger binary image.
 +
  if DEBUG || EXPERT = "y"
>>> Perhaps better move your addition into this conditional section?
>> So this was a bad suggestion after all - with DEBUG=n DEBUG_INFO is
>> now implicitly n as well. The section needs to be moved back to where
>> you had it as per above, with the _prompt_ depending on
>> DEBUG || EXPERT="y".
> Furthermore - is COVERAGE without DEBUG_INFO of any use?

Yes - very much so.

From a "how much of my binary does do my tests cover" point of view, you
want the release binary rather than the debug binary.

In some copious free time, I'd like to automate the measurements of "how
much of Xen does the XTF suite cover?"

> Are there
> perhaps any other dependencies (I think/hope live patching logic doesn't
> depend on debug info)?

The livepatch build depends on xen-syms containing all the debug
information, but the runtime logic doesn't, I believe.

~Andrew


Re: [Xen-devel] [PATCH v2 08/13] optee: add support for RPC SHM buffers

2018-09-10 Thread Volodymyr Babchuk

Hi Julien,

On 10.09.18 16:01, Julien Grall wrote:

Hi Volodymyr,

On 03/09/18 17:54, Volodymyr Babchuk wrote:

OP-TEE usually uses the same idea with command buffers (see
previous commit) to issue RPC requests. Problem is that initially
it has no buffer, where it can write request. So the first RPC
request it makes is special: it requests NW to allocate shared
buffer for other RPC requests. Usually this buffer is allocated
only once for every OP-TEE thread and it remains allocated all
the time until shutdown.

Mediator needs to pin this buffer(s) to make sure that domain can't
transfer it to someone else. Also it should be mapped into XEN
address space, because mediator needs to check responses from
guests.


Can you explain why you always need to keep the shared buffer mapped in 
Xen? Why not using access_guest_memory_by_ipa every time you want to get 
information from the guest?

Sorry, I just didn't know about this mechanism. But for performance reasons,
I'd like to keep these buffers always mapped. You see, RPC returns are
very frequent (one for every IRQ, actually), so I think it would be costly
to map/unmap this buffer every time.



Signed-off-by: Volodymyr Babchuk 
---
  xen/arch/arm/tee/optee.c | 121 ++-

  1 file changed, 119 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 1008eba..6d6b51d 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -21,6 +21,7 @@
  #include 
  #define MAX_STD_CALLS   16
+#define MAX_RPC_SHMS    16
  /*
   * Call context. OP-TEE can issue multiple RPC returns during one call.
@@ -35,11 +36,22 @@ struct std_call_ctx {
  int rpc_op;
  };
+/* Pre-allocated SHM buffer for RPC commands */
+struct shm_rpc {
+    struct list_head list;
+    struct optee_msg_arg *guest_arg;
+    struct page *guest_page;
+    mfn_t guest_mfn;
+    uint64_t cookie;
+};
+
  struct domain_ctx {
  struct list_head list;
  struct list_head call_ctx_list;
+    struct list_head shm_rpc_list;
  struct domain *domain;
  atomic_t call_ctx_count;
+    atomic_t shm_rpc_count;
  spinlock_t lock;
  };
@@ -145,8 +157,10 @@ static int optee_enable(struct domain *d)
  ctx->domain = d;
   INIT_LIST_HEAD(&ctx->call_ctx_list);
+    INIT_LIST_HEAD(&ctx->shm_rpc_list);
   atomic_set(&ctx->call_ctx_count, 0);
+    atomic_set(&ctx->shm_rpc_count, 0);
   spin_lock_init(&ctx->lock);
   spin_lock(&domain_ctx_list_lock);
@@ -256,11 +270,81 @@ static struct std_call_ctx *find_call_ctx(struct domain_ctx *ctx, int thread_id)

  return NULL;
  }
+static struct shm_rpc *allocate_and_map_shm_rpc(struct domain_ctx *ctx, paddr_t gaddr,


I would prefer if you pass a gfn instead of the address here.


+    uint64_t cookie)


NIT: Indentation


+{
+    struct shm_rpc *shm_rpc;
+    int count;
+
+    count = atomic_add_unless(&ctx->shm_rpc_count, 1, MAX_RPC_SHMS);
+    if ( count == MAX_RPC_SHMS )
+    return NULL;
+
+    shm_rpc = xzalloc(struct shm_rpc);
+    if ( !shm_rpc )
+    goto err;
+
+    shm_rpc->guest_mfn = lookup_and_pin_guest_ram_addr(gaddr, NULL);
+
+    if ( mfn_eq(shm_rpc->guest_mfn, INVALID_MFN) )
+    goto err;
+
+    shm_rpc->guest_arg = map_domain_page_global(shm_rpc->guest_mfn);
+    if ( !shm_rpc->guest_arg )
+    {
+    gprintk(XENLOG_INFO, "Could not map domain page\n");


You don't unpin the guest page if Xen can't map the page.


+    goto err;
+    }
+    shm_rpc->cookie = cookie;
+
+    spin_lock(&ctx->lock);
+    list_add_tail(&shm_rpc->list, &ctx->shm_rpc_list);
+    spin_unlock(&ctx->lock);
+
+    return shm_rpc;
+
+err:
+    atomic_dec(&ctx->shm_rpc_count);
+    xfree(shm_rpc);
+    return NULL;
+}
+
+static void free_shm_rpc(struct domain_ctx *ctx, uint64_t cookie)
+{
+    struct shm_rpc *shm_rpc;
+    bool found = false;
+
+    spin_lock(&ctx->lock);
+
+    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
+    {
+    if ( shm_rpc->cookie == cookie )


What does guarantee you the cookie will be uniq?

The Normal World guarantees this. It is part of the protocol.


+    {
+    found = true;
+    list_del(&shm_rpc->list);
+    break;
+    }
+    }
+    spin_unlock(&ctx->lock);


At this point you have a shm_rpc in hand to free. But what does 
guarantee you no-one will use it?

This is a valid point. I'll revisit this part of the code, thank you.
Looks like I need some refcounting there.


+
+    if ( !found ) {
+    return;
+    }


No need for the {} in a one-liner.


+
+    if ( shm_rpc->guest_arg ) {


Coding style:

if ( ... )
{


+    unpin_guest_ram_addr(shm_rpc->guest_mfn);
+    unmap_domain_page_global(shm_rpc->guest_arg);
+    }
+
+    xfree(shm_rpc);
+}
+
  static void optee_domain_destroy(struct domain *d)
  {
  struct arm_smccc_res resp;
  struct domain_ctx *ctx;
  struct std_call_ctx *call, *call_tmp;
+    struct shm_rpc *shm_rpc, *shm_rpc_tmp;
  bool found = false;
  /* At this time all 

Re: [Xen-devel] [PATCH v2 07/13] optee: add std call handling

2018-09-10 Thread Volodymyr Babchuk

Hi Julien,

On 05.09.18 18:17, Julien Grall wrote:

Hi,

On 09/03/2018 05:54 PM, Volodymyr Babchuk wrote:

Main way to communicate with OP-TEE is to issue standard SMCCC


NIT: The main way


call. "Standard" is a SMCCC term and it means that call can be
interrupted and OP-TEE can return control to NW before completing
the call.

In contranst with fast calls, where arguments and return values


NIT: s/contranst/contrast/


are passed in registers, standard calls use shared memory. Register
pair r1,r2 holds 64-bit PA of command buffer, where all arguments


Do you mean w1, w2?

Good question. What should I call the registers so that the text is valid
both for ARMv7 and ARMv8?


are stored and which is used to return data. OP-TEE internally
copies contents of this buffer into own secure memory before accessing
and validating any data in command buffer. This is done to make sure
that NW will not change contents of the validated parameters.

Mediator needs to do the same for number of reasons:

1. To make sure that guest will not change data after validation.
2. To translate IPAs to PAs in the command buffer (this is not done
    in this patch).
3. To hide translated address from guest, so it will not be able
    to do IPA->PA translation by misusing mediator.

Also mediator pins the page with original command buffer because
it will write to it later, when returning response from OP-TEE.


I don't think it is necessary to pin the guest command buffer. You can 
use guestcopy helper to copy to/from the guest memory.


If the guest modify the P2M at the same time, then it is not our 
business if something wrong happen. The only things we need to prevent 
here is writing back to an MFN that does not belong to the domain.
You are right. I did this in the same way as OP-TEE does, except that
OP-TEE does not copy the whole command buffer. So yes, your approach is
better. Thank you.



During standard call OP-TEE can issue multiple "RPC returns", asking
NW to do some work for OP-TEE. NW then issues special call
OPTEE_SMC_CALL_RETURN_FROM_RPC to resume handling of the original call.
Thus, mediator needs to maintain context for original standard call
during multiple SMCCC calls.

Standard call is considered complete, when returned value is
not RPC request.

Signed-off-by: Volodymyr Babchuk 
---
  xen/arch/arm/tee/optee.c | 319 ++-

  1 file changed, 316 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index c895a99..1008eba 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -12,6 +12,7 @@
   */
  #include 
+#include 
  #include 
  #include 
  #include 
@@ -19,9 +20,27 @@
  #include 
  #include 
+#define MAX_STD_CALLS   16


I suspect this is used to restrict the number of calls in flight. If so, 
I would appreciate if this is documented in the code and the limitations 
explained in the commit message.

Yes, you are right. I'll add a description.



+
+/*
+ * Call context. OP-TEE can issue multiple RPC returns during one call.
+ * We need to preserve context during them.
+ */
+struct std_call_ctx {
+    struct list_head list;
+    struct optee_msg_arg *guest_arg;
+    struct optee_msg_arg *xen_arg;
+    mfn_t guest_arg_mfn;
+    int optee_thread_id;
+    int rpc_op;
+};
+
  struct domain_ctx {
  struct list_head list;
+    struct list_head call_ctx_list;
  struct domain *domain;
+    atomic_t call_ctx_count;
+    spinlock_t lock;
  };
  static LIST_HEAD(domain_ctx_list);
@@ -49,6 +68,44 @@ static bool optee_probe(void)
  return true;
  }
+static mfn_t lookup_and_pin_guest_ram_addr(paddr_t gaddr,
+    struct page_info **pg)


I don't think there is a need to return both an MFN and a page. You can
easily deduce one from the other. In this context, it would be better to
return a page.


Also, I would prefer it if this function took a guest frame address over
a guest physical address.


Lastly, this function is basically a re-implementation of
get_page_from_gfn() with a restriction on the p2m type. Please
re-implement it using that function.
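
(A minimal sketch of such a re-implementation, assuming the usual
get_page_from_gfn() signature; the function name is illustrative:)

    static struct page_info *get_cmd_buf_page(struct domain *d, gfn_t gfn)
    {
        p2m_type_t t;
        struct page_info *page = get_page_from_gfn(d, gfn_x(gfn), &t,
                                                   P2M_ALLOC);

        /* Only plain read-write guest RAM may back a command buffer. */
        if ( page && t != p2m_ram_rw )
        {
            put_page(page);
            page = NULL;
        }

        return page;
    }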



+{
+    mfn_t mfn;
+    gfn_t gfn;
+    p2m_type_t t;
+    struct page_info *page;
+    struct domain *d = current->domain;
+
+    gfn = gaddr_to_gfn(gaddr);
+    mfn = p2m_lookup(d, gfn, &t);
+
+    if ( t != p2m_ram_rw || mfn_eq(mfn, INVALID_MFN) )
+    return INVALID_MFN;
+
+    page = mfn_to_page(mfn);


mfn_to_page() can never fail. If you want to check whether an MFN is 
valid, then you can use mfn_valid(...).



+    if ( !page )
+    return INVALID_MFN;
+
+    if ( !get_page(page, d) )
+    return INVALID_MFN;
+
+    if ( pg )
+    *pg = page;
+
+    return mfn;
+}
+
+static void unpin_guest_ram_addr(mfn_t mfn)
+{
+    struct page_info *page;
+    page = mfn_to_page(mfn);
+    if ( !page )
+    return;
+
+    put_page(page);
+}
+
  static struct domain_ctx *find_domain_ctx(struct domain* d)
  {
  struct 

[Xen-devel] [xen-unstable test] 127463: trouble: broken/fail/pass

2018-09-10 Thread osstest service owner
flight 127463 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/127463/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xen-freebsd  broken
 build-amd64-xen-freebsd   5 host-install(5)broken REGR. vs. 127429

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 127429
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 127429
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 127429
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 127429
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 127429
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 127429
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 127429
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 127429
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 127429
 build-amd64-xen-xsm-freebsd   7 xen-build-freebsdfail   never pass
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen  1d069e45f7c2f6b2982797dd32092b300bacafad
baseline version:
 xen  1d069e45f7c2f6b2982797dd32092b300bacafad

Last test of basis   127463  2018-09-10 01:52:54 Z0 days
Testing same since  (not found) 0 attempts

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-arm64

Re: [Xen-devel] [PATCH V3 1/2] Drivers/PCI: Export pcie_has_flr() interface

2018-09-10 Thread Sinan Kaya

On 9/10/2018 5:52 AM, Pasi Kärkkäinen wrote:

Hi,

On Sun, Sep 09, 2018 at 10:33:02PM -0400, Sinan Kaya wrote:

On 9/9/2018 2:59 PM, Pasi Kärkkäinen wrote:

I noticed pcie_has_flr() has been recently exported in upstream Linux:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=2d2917f7747805a1f4188672f308d82a8ba01700

Are there more changes / cleanups planned to these interfaces, as mentioned 
last year?

(context: xen-pciback reset/do_flr features upstreaming, which kind of stalled 
last year when pcie_has_flr() wasn't exported at the time)


Exporting pcie_has_flr() is a very simple change which could have been done
by the XEN porting effort.

Maybe, the right question is what is so special about XEN reset?

What feature PCI core is missing to support XEN FLR reset that caused
the effort to stall?



Well one of the reasons probably was because Christoph was planning to 
deprecate the pcie_has_flr() functionality..

https://lists.xen.org/archives/html/xen-devel/2017-12/msg01057.html
https://lists.xen.org/archives/html/xen-devel/2017-12/msg01252.html

But now that pcie_has_flr() is exported and available I guess it's fine to 
continue using it also for xen-pciback :)



Yeah, I would go ahead with the implementation. Refactoring can be done
independently.



Thanks,

-- Pasi





___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH 0/5] Misc changes to xentrace_format

2018-09-10 Thread Andrii Anisov
From: Andrii Anisov 

Make xentrace_format more convenient and up to date in usage.

Andrii Anisov (5):
  xentrace_format: print timestamps in nanoseconds
  xentrace_format: switch mhz option to float
  xentrace_format: combine 64-bit LE values from traces
  formats: align trace record format to the current code
  formats: print time values as decimals

 tools/xentrace/formats |  6 +++---
 tools/xentrace/xentrace_format | 35 +++
 2 files changed, 30 insertions(+), 11 deletions(-)

-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH 1/5] xentrace_format: print timestamps in nanoseconds

2018-09-10 Thread Andrii Anisov
From: Andrii Anisov 

Signed-off-by: Andrii Anisov 
---
 tools/xentrace/xentrace_format | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/tools/xentrace/xentrace_format b/tools/xentrace/xentrace_format
index 5ff85ae..323d0c2 100644
--- a/tools/xentrace/xentrace_format
+++ b/tools/xentrace/xentrace_format
@@ -8,7 +8,11 @@ import re, sys, string, signal, struct, os, getopt
 
 def usage():
 print >> sys.stderr, \
-  "Usage: " + sys.argv[0] + """ defs-file
+  "Usage: " + sys.argv[0] + """ [-c mhz] defs-file
+   -c mhz   optional time stamps values generator frequency in
+MHz. If specified, timestamps are shown in ns,
+otherwise in cycles.
+
   Parses trace data in binary format, as output by Xentrace and
   reformats it according to the rules in a file of definitions.  The
   rules in this file should have the format ({ and } show grouping
@@ -223,7 +227,7 @@ while not interrupted:
 last_tsc[cpu] = tsc
 
 if mhz:
-tsc = tsc / (mhz*1000000.0)
+tsc = tsc * 1000.0 / mhz
 
 args = {'cpu'   : cpu,
 'tsc'   : tsc,
-- 
2.7.4
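
(As a worked example of the new conversion: with -c 816, i.e. an
816 MHz cycle counter, a raw delta of 8160 cycles is printed as
8160 * 1000.0 / 816 = 10000 ns, since cycles * 1000 / MHz yields
nanoseconds.)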


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH 4/5] formats: align trace record format to the current code

2018-09-10 Thread Andrii Anisov
From: Andrii Anisov 

Align rtds:repl_budget trace record format to the current code.

Signed-off-by: Andrii Anisov 
---
 tools/xentrace/formats | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xentrace/formats b/tools/xentrace/formats
index d6e7e3f..7db6d49 100644
--- a/tools/xentrace/formats
+++ b/tools/xentrace/formats
@@ -75,7 +75,7 @@
 0x00022801  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:tickle[ cpu = 
%(1)d ]
 0x00022802  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:runq_pick [ dom:vcpu 
= 0x%(1)08x, cur_deadline = 0x%(3)08x%(2)08x, cur_budget = 0x%(5)08x%(4)08x ]
 0x00022803  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:burn_budget   [ dom:vcpu 
= 0x%(1)08x, cur_budget = 0x%(3)08x%(2)08x, delta = %(4)d ]
-0x00022804  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:repl_budget   [ dom:vcpu 
= 0x%(1)08x, cur_deadline = 0x%(3)08x%(2)08x, cur_budget = 0x%(5)08x%(4)08x ]
+0x00022804  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:repl_budget   [ dom:vcpu 
= 0x%(1)08x, priority_level = %(2)d, cur_deadline = 0x%(4)08x%(3)08x, 
cur_budget = 0x%(6)08x%(5)08x ]
 0x00022805  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:sched_tasklet
 0x00022806  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:schedule  [ 
cpu[16]:tasklet[8]:idle[4]:tickled[4] = %(1)08x ]
 
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH 3/5] xentrace_format: combine 64-bit LE values from traces

2018-09-10 Thread Andrii Anisov
From: Andrii Anisov 

In order to be able to print possible 64-bit LE values from
trace records, pre-combine the possible variants.

Signed-off-by: Andrii Anisov 
---
 tools/xentrace/xentrace_format | 23 +++
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/tools/xentrace/xentrace_format b/tools/xentrace/xentrace_format
index cae7d34..d989924 100644
--- a/tools/xentrace/xentrace_format
+++ b/tools/xentrace/xentrace_format
@@ -26,9 +26,11 @@ def usage():
 will output in hexadecimal and 'o' will output in octal ]
 
   Which correspond to the CPU number, event ID, timestamp counter and
-  the 7 data fields from the trace record.  There should be one such
-  rule for each type of event.
-  
+  the 7 data fields from the trace record. Also combined 64bit LE
+  fields are available. E.g. %(21)d is a decimal representation of a
+  64bit LE value placed as the first element of the trace record.
+  There should be one such rule for each type of event.
+
   Depending on your system and the volume of trace buffer data,
   this script may not be able to keep up with the output of xentrace
   if it is piped directly.  In these circumstances you should have
@@ -185,6 +187,13 @@ while not interrupted:
 break
 (d1, d2, d3, d4, d5, d6, d7) = struct.unpack(D7REC, line)
 
+d21 = (d2 << 32) | (0xffffffff & d1)
+d32 = (d3 << 32) | (0xffffffff & d2)
+d43 = (d4 << 32) | (0xffffffff & d3)
+d54 = (d5 << 32) | (0xffffffff & d4)
+d65 = (d6 << 32) | (0xffffffff & d5)
+d76 = (d7 << 32) | (0xffffffff & d6)
+
 # Event field is 28bit of 'uint32_t' in header, not 'long'.
 event &= 0x0fffffff
 if event == 0x1f003:
@@ -239,7 +248,13 @@ while not interrupted:
 '4' : d4,
 '5' : d5,
 '6' : d6,
-'7' : d7}
+'7' : d7,
+'21': d21,
+'32': d32,
+'43': d43,
+'54': d54,
+'65': d65,
+'76': d76}
 
 try:
 
-- 
2.7.4
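
(Worked example: if d1 = 0xdeadbeef and d2 = 0x12345678, then %(21)x
prints d21 = (d2 << 32) | (0xffffffff & d1) = 0x12345678deadbeef, i.e.
the 64-bit value whose low 32-bit half was stored first, as
little-endian ordering implies.)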


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH 5/5] formats: print time values as decimals

2018-09-10 Thread Andrii Anisov
From: Andrii Anisov 

For convenience, print RTDS budget and deadline values as decimals.

Signed-off-by: Andrii Anisov 
---
 tools/xentrace/formats | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/xentrace/formats b/tools/xentrace/formats
index 7db6d49..cf25ab0 100644
--- a/tools/xentrace/formats
+++ b/tools/xentrace/formats
@@ -73,9 +73,9 @@
 0x00022216  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  csched2:runq_cand_chk  [ 
dom:vcpu = 0x%(1)08x ]
 
 0x00022801  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:tickle[ cpu = 
%(1)d ]
-0x00022802  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:runq_pick [ dom:vcpu 
= 0x%(1)08x, cur_deadline = 0x%(3)08x%(2)08x, cur_budget = 0x%(5)08x%(4)08x ]
-0x00022803  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:burn_budget   [ dom:vcpu 
= 0x%(1)08x, cur_budget = 0x%(3)08x%(2)08x, delta = %(4)d ]
-0x00022804  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:repl_budget   [ dom:vcpu 
= 0x%(1)08x, priority_level = %(2)d, cur_deadline = 0x%(4)08x%(3)08x, 
cur_budget = 0x%(6)08x%(5)08x ]
+0x00022802  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:runq_pick [ dom:vcpu 
= 0x%(1)08x, cur_deadline = %(32)20d, cur_budget = %(54)20d ]
+0x00022803  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:burn_budget   [ dom:vcpu 
= 0x%(1)08x, cur_budget = %(32)20d, delta = %(4)d ]
+0x00022804  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:repl_budget   [ dom:vcpu 
= 0x%(1)08x, priority_level = %(2)d, cur_deadline = %(43)20d, cur_budget = 
%(65)20d ]
 0x00022805  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:sched_tasklet
 0x00022806  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:schedule  [ 
cpu[16]:tasklet[8]:idle[4]:tickled[4] = %(1)08x ]
 
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH 2/5] xentrace_format: switch mhz option to float

2018-09-10 Thread Andrii Anisov
From: Andrii Anisov 

On some systems, fractions of a MHz might be a meaningful part
of the cycle frequency value, so accept the MHz value as a float.

Signed-off-by: Andrii Anisov 
---
 tools/xentrace/xentrace_format | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/xentrace/xentrace_format b/tools/xentrace/xentrace_format
index 323d0c2..cae7d34 100644
--- a/tools/xentrace/xentrace_format
+++ b/tools/xentrace/xentrace_format
@@ -11,7 +11,7 @@ def usage():
   "Usage: " + sys.argv[0] + """ [-c mhz] defs-file
-c mhz   optional time stamps values generator frequency in
 MHz. If specified, timestamps are shown in ns,
-otherwise in cycles.
+otherwise in cycles. Accepts float.
 
   Parses trace data in binary format, as output by Xentrace and
   reformats it according to the rules in a file of definitions.  The
@@ -65,7 +65,7 @@ def sighand(x,y):
 
 # Main code
 
-mhz = 0
+mhz = 0.0
 
 if len(sys.argv) < 2:
 usage()
@@ -74,7 +74,7 @@ try:
 opts, arg = getopt.getopt(sys.argv[1:], "c:" )
 
 for opt in opts:
-if opt[0] == '-c' : mhz = int(opt[1])
+if opt[0] == '-c' : mhz = float(opt[1])
 
 except getopt.GetoptError:
 usage()
-- 
2.7.4
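
(Illustrative invocation, with a made-up trace file name and the
in-tree formats file:

    xentrace_format -c 816.5 formats < trace.bin

Before this change, int() would have rejected "816.5" outright.)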


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] xpti=dom0=false doesn't seem to work on 4.8.4

2018-09-10 Thread Karl Johnson
On Mon, Jul 16, 2018 at 5:39 AM Jan Beulich  wrote:

> >>> On 13.07.18 at 20:54,  wrote:
> > Hello,
> >
> > I'm currently testing last Xen 4.8.4 build for CentOS
> > (http://cbs.centos.org/koji/buildinfo?buildID=23169) and disabling
> > XPTI for dom0 doesn't seem to work:
> >
> > (XEN) Command line: dom0_mem=1792M,max:2048M dom0_max_vcpus=4 cpuinfo
> > com1=115200,8n1 console=com1,vga xpti=dom0=false loglvl=all
> > guest_loglvl=all crashkernel=512M@64M
> >
> > (XEN)   XPTI (64-bit PV only): Dom0 enabled, DomU enabled
> >
> > Bug or wrong syntax?
>
> Bug - see
> https://lists.xenproject.org/archives/html/xen-devel/2018-07/msg01305.html
> .
>
> Alternatively you could use "xpti=no-dom0" or (I think) "xpti=dom0=false,".
>
> Jan
>

Is xpti command line broken again on 4.8? I've upgraded to package
xen-4.8.4.43.ge52ec4b787-1 which is based on latest staging snapshot and
xpti=no-dom0 now seems to disable XPTI for domU:

[root@node-tmp1 ~]# xl dmesg|grep -i xpti
(XEN) Command line: dom0_mem=1792M,max:2048M dom0_max_vcpus=4 cpuinfo
com1=115200,8n1 console=com1,vga xpti=no-dom0 crashkernel=512M@64M
loglvl=all guest_loglvl=all
(XEN)   XPTI (64-bit PV only): Dom0 disabled, DomU disabled

Karl
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4] xen: avoid crash in disable_hotplug_cpu

2018-09-10 Thread Olaf Hering
Am Fri, 7 Sep 2018 12:56:37 -0400
schrieb Boris Ostrovsky :

> I was hoping you'd respond to my question about warning.
> 
> root@haswell> xl vcpu-set 3 0  
> and in the guest
> 
> [root@vm-0238 ~]# [   32.866955] [ cut here ]
> [   32.866963] spinlock on CPU0 exists on IRQ1!
> [   32.866984] WARNING: CPU: 0 PID: 14 at arch/x86/xen/spinlock.c:90
> xen_init_lock_cpu+0xbf/0xd0

This happens to work for me, on X5550. Please send your .config.

Olaf


pgpofLFY4MBHD.pgp
Description: Digitale Signatur von OpenPGP
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2 10/13] optee: add support for RPC commands

2018-09-10 Thread Julien Grall

Hi Volodymyr,

On 03/09/18 17:54, Volodymyr Babchuk wrote:

OP-TEE can issue multiple RPC requests. We are interested mostly in
requests that ask NW to allocate/free shared memory for OP-TEE
needs, becuase mediator need to do address translation in the same


s/becuase/because/
s/need/needs/

the mediator



way as it was done for shared buffers registered by NW.

As mediator now accesses shared command buffer, we need to shadow


"As the"


it in the same way as we shadow request buffers for STD calls.

Signed-off-by: Volodymyr Babchuk 
---
  xen/arch/arm/tee/optee.c | 137 +++
  1 file changed, 126 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 8bfcfdc..b2d795e 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -45,6 +45,7 @@ struct std_call_ctx {
  struct shm_rpc {
  struct list_head list;
  struct optee_msg_arg *guest_arg;
+struct optee_msg_arg *xen_arg;
  struct page *guest_page;
  mfn_t guest_mfn;
  uint64_t cookie;
@@ -303,6 +304,10 @@ static struct shm_rpc *allocate_and_map_shm_rpc(struct 
domain_ctx *ctx, paddr_t
  if ( !shm_rpc )
  goto err;
  
+shm_rpc->xen_arg = alloc_xenheap_page();

+if ( !shm_rpc->xen_arg )
+goto err;
+
  shm_rpc->guest_mfn = lookup_and_pin_guest_ram_addr(gaddr, NULL);
  
  if ( mfn_eq(shm_rpc->guest_mfn, INVALID_MFN) )

@@ -324,6 +329,10 @@ static struct shm_rpc *allocate_and_map_shm_rpc(struct 
domain_ctx *ctx, paddr_t
  
  err:

   atomic_dec(&ctx->shm_rpc_count);
+
+if ( shm_rpc->xen_arg )
+free_xenheap_page(shm_rpc->xen_arg);
+
  xfree(shm_rpc);
  return NULL;
  }
@@ -346,9 +355,10 @@ static void free_shm_rpc(struct domain_ctx *ctx, uint64_t 
cookie)
  }
   spin_unlock(&ctx->lock);
  
-if ( !found ) {

+if ( !found )
  return;
-}


That change should be folded into the patch adding the {}.


+
+free_xenheap_page(shm_rpc->xen_arg);
  
  if ( shm_rpc->guest_arg ) {

  unpin_guest_ram_addr(shm_rpc->guest_mfn);
@@ -358,6 +368,24 @@ static void free_shm_rpc(struct domain_ctx *ctx, uint64_t 
cookie)
  xfree(shm_rpc);
  }
  
+static struct shm_rpc *find_shm_rpc(struct domain_ctx *ctx, uint64_t cookie)

+{
+struct shm_rpc *shm_rpc;
+
+spin_lock(&ctx->lock);
+list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
+{
+if ( shm_rpc->cookie == cookie )
+{
+spin_unlock(&ctx->lock);
+return shm_rpc;
+}
+}
+spin_unlock(&ctx->lock);
+
+return NULL;
+}
+
  static struct shm_buf *allocate_shm_buf(struct domain_ctx *ctx,
  uint64_t cookie,
  int pages_cnt)
@@ -704,6 +732,28 @@ static bool copy_std_request_back(struct domain_ctx *ctx,
  return true;
  }
  
+static void handle_rpc_return(struct domain_ctx *ctx,

+ struct cpu_user_regs *regs,
+ struct std_call_ctx *call)
+{
+call->optee_thread_id = get_user_reg(regs, 3);
+call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(get_user_reg(regs, 0));
+
+if ( call->rpc_op == OPTEE_SMC_RPC_FUNC_CMD )
+{
+/* Copy RPC request from shadowed buffer to guest */
+uint64_t cookie = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
+struct shm_rpc *shm_rpc = find_shm_rpc(ctx, cookie);


Newline between declaration and code.


+if ( !shm_rpc )
+{
+gprintk(XENLOG_ERR, "Can't find SHM-RPC with cookie %lx\n", cookie);
+return;
+}
+memcpy(shm_rpc->guest_arg, shm_rpc->xen_arg,
+   OPTEE_MSG_GET_ARG_SIZE(shm_rpc->xen_arg->num_params));
+}
+}
+
  static bool execute_std_call(struct domain_ctx *ctx,
   struct cpu_user_regs *regs,
   struct std_call_ctx *call)
@@ -715,8 +765,7 @@ static bool execute_std_call(struct domain_ctx *ctx,
  optee_ret = get_user_reg(regs, 0);
  if ( OPTEE_SMC_RETURN_IS_RPC(optee_ret) )
  {
-call->optee_thread_id = get_user_reg(regs, 3);
-call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(optee_ret);
+handle_rpc_return(ctx, regs, call);


It would make sense to introduce handle_rpc_return where you actually 
add those 2 lines.



  return true;
  }
  
@@ -783,6 +832,74 @@ out:

  return ret;
  }
  
+

+static void handle_rpc_cmd_alloc(struct domain_ctx *ctx,
+ struct cpu_user_regs *regs,
+ struct std_call_ctx *call,
+ struct shm_rpc *shm_rpc)
+{
+if ( shm_rpc->xen_arg->params[0].attr != (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
+OPTEE_MSG_ATTR_NONCONTIG) )
+{
+gprintk(XENLOG_WARNING, "Invalid attrs for shared mem buffer\n");
+return;
+

Re: [Xen-devel] v4.19-rc3, wrong pageflags in dom0

2018-09-10 Thread Olaf Hering
Am Mon, 10 Sep 2018 14:49:07 +0200
schrieb Olaf Hering :

> After reboot I tried to start my HVM domU, this is what I get in dom0:
> [  223.019451] page:ea007bed9040 count:1 mapcount:-1 
> mapping: index:0x0

this also happens with rc2 and xen.git#staging as dom0. v4.18 works.

Olaf


pgppd5y7EODwg.pgp
Description: Digitale Signatur von OpenPGP
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH] SVM: limit GIF=0 region

2018-09-10 Thread Jan Beulich
Use EFLAGS.IF for most ordinary purposes; there's in particular no need
to unduly defer NMI/#MC. Clear GIF only immediately before VMRUN itself.
This has the additional advantage that svm_stgi_label now indeed marks
the only place where GIF gets set.

Note regarding the main STI placement: Quite counterintuitively the
host's EFLAGS.IF continues to have a meaning while the guest runs; see
PM Vol 2 section "Physical (INTR) Interrupt Masking in EFLAGS". Hence we
need to set the flag for the duration of time being in guest context.
However, SPEC_CTRL_ENTRY_FROM_HVM wants to be carried out with EFLAGS.IF
clear.

Note regarding the main STGI placement: It could be moved further up,
but at present SPEC_CTRL_EXIT_TO_HVM is not NMI/#MC-safe.

Suggested-by: Andrew Cooper 
Signed-off-by: Jan Beulich 
---
v3: Keep main STGI at its current place, and explain why that is (I'm
sorry Boris, had to drop your R-b yet another time).
v2: Add CLI after VMRUN. Adjust description.

--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -43,7 +43,7 @@ ENTRY(svm_asm_do_resume)
 lea  irq_stat+IRQSTAT_softirq_pending(%rip),%rdx
 xor  %ecx,%ecx
 shl  $IRQSTAT_shift,%eax
-CLGI
+cli
 cmp  %ecx,(%rdx,%rax,1)
 jne  .Lsvm_process_softirqs
 
@@ -57,7 +57,7 @@ UNLIKELY_START(ne, nsvm_hap)
  * Someone shot down our nested p2m table; go round again
  * and nsvm_vcpu_switch() will fix it for us.
  */
-STGI
+sti
 jmp  .Lsvm_do_resume
 __UNLIKELY_END(nsvm_hap)
 
@@ -87,6 +87,8 @@ __UNLIKELY_END(nsvm_hap)
 pop  %rsi
 pop  %rdi
 
+CLGI
+sti
 VMRUN
 
 SAVE_ALL
@@ -103,6 +105,6 @@ GLOBAL(svm_stgi_label)
 jmp  .Lsvm_do_resume
 
 .Lsvm_process_softirqs:
-STGI
+sti
 call do_softirq
 jmp  .Lsvm_do_resume





___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [DRBD-user] [PATCH] xen-blkback: Switch to closed state after releasing the backing device

2018-09-10 Thread Roger Pau Monné
On Mon, Sep 10, 2018 at 03:22:52PM +0200, Valentin Vidic wrote:
> On Mon, Sep 10, 2018 at 02:45:31PM +0200, Lars Ellenberg wrote:
> > On Sat, Sep 08, 2018 at 09:34:32AM +0200, Valentin Vidic wrote:
> > > On Fri, Sep 07, 2018 at 07:14:59PM +0200, Valentin Vidic wrote:
> > > > In fact the first one is the original code path before I modified
> > > > blkback.  The problem is it gets executed async from workqueue so
> > > > it might not always run before the call to drbdadm secondary.
> > > 
> > > As the DRBD device gets released only when the last IO request
> > > has finished, I found a way to check and wait for this in the
> > > block-drbd script:
> > 
> > > --- block-drbd.orig 2018-09-08 09:07:23.499648515 +0200
> > > +++ block-drbd  2018-09-08 09:28:12.892193649 +0200
> > > @@ -230,6 +230,24 @@
> > >  and so cannot be mounted ${m2}${when}."
> > >  }
> > >  
> > > +wait_for_inflight()
> > > +{
> > > +  local dev="$1"
> > > +  local inflight="/sys/block/${dev#/dev/}/inflight"
> > > +  local rd wr
> > > +
> > > +  if ! [ -f "$inflight" ]; then
> > > +return
> > > +  fi
> > > +
> > > +  while true; do
> > > +read rd wr < $inflight
> > > +if [ "$rd" = "0" -a "$wr" = "0" ]; then
> > 
> > If it is "idle" now, but still "open",
> > this will not sleep, and still fail the demotion below.
> 
> True, but in this case blkback is holding it open until all
> the writes have finished and the last write closes the device.
> Since fuser can't check blkback this is an approximation that
> seems to work because I don't get any failed drbdadm calls now.
> 
> > You try to help it by "waiting forever until it appears to be idle".
> > I suggest to at least limit the retries by iteration or time.
> > And also (or, instead; but you'd potentially get a number of
> > "scary messages" in the logs) add something like:
> 
> Ok, should I open a PR to discuss this change further?
> 
> > Or, well, yes, fix blkback to not "defer" the final close "too long",
> > if at all possible.
> 
> blkback needs to finish the writes on shutdown or I get fsck errors
> on next boot. Ideally XenbusStateClosed should be delayed until the
> device release but currently it does not seem possible without breaking
> other things.

I can try to take a look at this and attempt to make sure the state is
only changed to closed in blkback _after_ the device has been
released, but it might take me a couple of days to get you a patch.

I'm afraid that other hotplug scripts will also have issues with such
behavior, and we shouldn't force all users of hotplug scripts to add
such workarounds.

Roger.

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2 4/5] x86/hvm: Handle x2apic MSRs via the new guest_{rd, wr}msr() infrastructure

2018-09-10 Thread Roger Pau Monné
On Wed, Mar 07, 2018 at 06:58:35PM +, Andrew Cooper wrote:
> Dispatch from the guest_{rd,wr}msr() functions.  The read side should be safe
> outside of current context, but the write side is definitely not.  As the
> toolstack has no legitimate reason to access the APIC registers via this
> interface (not least because whether they are accessible at all depends on
> guest settings), unilaterally reject access attempts outside of current
> context.
> 
> Rename to guest_{rd,wr}msr_x2apic() for consistency, and alter the functions
> to use X86EMUL_EXCEPTION rather than X86EMUL_UNHANDLEABLE.  The previous
> callers turned UNHANDLEABLE into EXCEPTION, but using UNHANDLEABLE will now
> interfere with the fallback to legacy MSR handling.
> 
> While altering guest_rdmsr_x2apic() make a couple of minor improvements.
> Reformat the initialiser for readable[] so it indents in a more natural way,
> and alter high to be a 64bit integer to avoid shifting 0 by 32 in the common
> path.
> 
> Observant people might notice that we now don't let PV guests read the x2apic
> MSRs.  They should never have been able to in the first place.
> 
> Signed-off-by: Andrew Cooper 

Reviewed-by: Roger Pau Monné 

Roger.

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2 3/5] x86: Fix APIC MSR constant names

2018-09-10 Thread Roger Pau Monné
On Wed, Mar 07, 2018 at 06:58:34PM +, Andrew Cooper wrote:
> We currently have MSR_IA32_APICBASE and MSR_IA32_APICBASE_MSR which are
> synonymous from a naming point of view, but refer to very different things.
> 
> Rename the x2APIC MSRs to MSR_X2APIC_*, which are shorter constants and
> visually separate the register function from the generic APIC name.  For the
> case ranges, introduce MSR_X2APIC_LAST, rather than relying on the knowledge
> that there are 0x3ff MSRs architecturally reserved for x2APIC functionality.
> 
> For functionality relating to the APIC_BASE MSR, use MSR_APIC_BASE for the MSR
> itself, but drop the MSR prefix from the other constants to shorten the names.
> In all cases, the fact that we are dealing with the APIC_BASE MSR is obvious
> from the context.
> 
> No functional change (the combined binary is identical).
> 
> Signed-off-by: Andrew Cooper 

Reviewed-by: Roger Pau Monné 

> diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> index 2b4014c..07f2209 100644
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -312,18 +312,21 @@
>  
>  #define MSR_IA32_TSC_ADJUST  0x003b
>  
> -#define MSR_IA32_APICBASE0x001b
> -#define MSR_IA32_APICBASE_BSP(1<<8)
> -#define MSR_IA32_APICBASE_EXTD   (1<<10)
> -#define MSR_IA32_APICBASE_ENABLE (1<<11)
> -#define MSR_IA32_APICBASE_BASE   0x000ff000ul
> -#define MSR_IA32_APICBASE_MSR   0x800
> -#define MSR_IA32_APICTPR_MSR0x808
> -#define MSR_IA32_APICPPR_MSR0x80a
> -#define MSR_IA32_APICEOI_MSR0x80b
> -#define MSR_IA32_APICTMICT_MSR  0x838
> -#define MSR_IA32_APICTMCCT_MSR  0x839
> -#define MSR_IA32_APICSELF_MSR   0x83f
> +#define MSR_APIC_BASE   0x001b
> +#define APIC_BASE_BSP   (1<<8)
> +#define APIC_BASE_EXTD  (1<<10)
> +#define APIC_BASE_ENABLE(1<<11)
> +#define APIC_BASE_BASE  0x000ff000ul

Maybe those could be indented like:

#define MSR_FOO
#define  FOO_BAR

Thanks, Roger.

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [ovmf baseline-only test] 75190: trouble: blocked/broken

2018-09-10 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 75190 ovmf real [real]
http://osstest.xensource.com/osstest/logs/75190/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm  broken
 build-i386   broken
 build-amd64-pvopsbroken
 build-i386-xsm   broken
 build-amd64  broken
 build-i386-pvops broken

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 build-i3864 host-install(4)   broken baseline untested
 build-i386-pvops  4 host-install(4)   broken baseline untested
 build-i386-xsm4 host-install(4)   broken baseline untested
 build-amd64-pvops 4 host-install(4)   broken baseline untested
 build-amd64-xsm   4 host-install(4)   broken baseline untested
 build-amd64   4 host-install(4)   broken baseline untested

version targeted for testing:
 ovmf 4b2dc555d8a67e715d8fafab4c9131791d31a788
baseline version:
 ovmf 40a7b235e4359b4e2eb4d379d1c543b9cae11346

Last test of basis75178  2018-09-08 03:51:35 Z2 days
Testing same since75190  2018-09-10 07:50:14 Z0 days1 attempts


People who touched revisions under test:
  Fu Siyuan 

jobs:
 build-amd64-xsm  broken  
 build-i386-xsm   broken  
 build-amd64  broken  
 build-i386   broken  
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopsbroken  
 build-i386-pvops broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xensource.com/osstest/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

broken-job build-amd64-xsm broken
broken-job build-i386 broken
broken-job build-amd64-pvops broken
broken-job build-i386-xsm broken
broken-job build-amd64 broken
broken-job build-i386-pvops broken
broken-step build-i386 host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64 host-install(4)

Push not applicable.


commit 4b2dc555d8a67e715d8fafab4c9131791d31a788
Author: Fu Siyuan 
Date:   Fri Sep 7 16:47:30 2018 +0800

ShellPkg: Remove trailing white space

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1158

Cc: Ruiyu Ni 
Cc: Jaben Carsey 
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Fu Siyuan 
Reviewed-by: Ruiyu Ni 

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 2/5] x86/hvm: Switch hvm_allow_set_param() to use a whitelist

2018-09-10 Thread Roger Pau Monné
On Fri, Sep 07, 2018 at 07:13:08PM +0100, Andrew Cooper wrote:
> On 07/09/18 17:01, Roger Pau Monné wrote:
> > On Wed, Sep 05, 2018 at 07:12:01PM +0100, Andrew Cooper wrote:
> >> There are holes in the HVM_PARAM space, some of which are from deprecated
> >> parameters, but toolstack and device models currently have (almost) blanket
> >> write access.
> >>
> >> Rearrange hvm_allow_get_param() to have a whitelist of toolstack-writeable
> >> parameters, with the default case failing with -EINVAL.  This subsumes the
> >> HVM_NR_PARAMS check, as well as the MEMORY_EVENT_* deprecated block, and 
> >> the
> >> BUFIOREQ_EVTCHN Xen-write-only value.
> >>
> >> No expected change for the defined, in-use params.
> >>
> >> Signed-off-by: Andrew Cooper 
> > Reviewed-by: Roger Pau Monné 
> >
> >> ---
> >> CC: Jan Beulich 
> >> CC: Wei Liu 
> >> CC: Roger Pau Monné 
> >> CC: Paul Durrant 
> >> CC: Stefano Stabellini 
> >> CC: Julien Grall 
> >> ---
> >>  xen/arch/x86/hvm/hvm.c | 53 
> >> +-
> >>  1 file changed, 31 insertions(+), 22 deletions(-)
> >>
> >> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> >> index 96a6323..d19ae35 100644
> >> --- a/xen/arch/x86/hvm/hvm.c
> >> +++ b/xen/arch/x86/hvm/hvm.c
> >> @@ -4073,7 +4073,7 @@ static int hvm_allow_set_param(struct domain *d,
> >>  
> >>  switch ( a->index )
> >>  {
> >> -/* The following parameters can be set by the guest. */
> >> +/* The following parameters can be set by the guest and 
> >> toolstack. */
> >>  case HVM_PARAM_CALLBACK_IRQ:
> >>  case HVM_PARAM_VM86_TSS:
> > Not sure about the point of letting the guest set the unreal mode
> > TSS, but anyway this is not the scope of the patch.
> 
> Because hvmloader still sets it up for HVM guests.
> 
> Neither you nor Jan took my hints (when doing various related work) that
> unifying the PVH and HVM paths in the domain builder (alongside
> IDENT_PT) would be a GoodThing(tm).
> 
> OTOH, we do now actually have a fairly simple cleanup task which a
> student could be guided through doing, which would allow us to remove
> guest access to these two params.

Hm, right. The main problem I see with this is that the hypervisor has
no knowledge of the memory map when building a DomU (all this is in
the toolstack), so it's quite hard to figure out where to place the
TSS or the identity page tables.

We could make the special page addresses part of the public
headers, so that there's a fixed location known by both the toolstack
and the hypervisor for those magic pages.
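
Something like the following, purely as an illustration of the idea (no
such constants exist in the public headers today, and the values are
made up):

    /* Hypothetical fixed layout shared by toolstack and hypervisor. */
    #define HVM_MAGIC_BASE_GFN   0xfeffcUL
    #define HVM_IDENT_PT_GFN     (HVM_MAGIC_BASE_GFN + 0)
    #define HVM_VM86_TSS_GFN     (HVM_MAGIC_BASE_GFN + 1)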

Roger.

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v20 11/13] x86/domctl: Use hvm_save_vcpu_handler

2018-09-10 Thread Alexandru Isaila
This patch aims to use the new save_one functions in hvm_save().

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V17:
- Remove double ;
- Move struct vcpu *v to reduce scope
- Remove stray lines.
---
 xen/arch/x86/hvm/save.c | 26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 870042b27f..e059ab4e13 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -195,7 +195,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 char *c;
 struct hvm_save_header hdr;
 struct hvm_save_end end;
-hvm_save_handler handler;
 unsigned int i;
 
 if ( d->is_dying )
@@ -223,8 +222,27 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 /* Save all available kinds of state */
 for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
 {
-handler = hvm_sr_handlers[i].save;
-if ( handler != NULL )
+hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
+hvm_save_handler handler = hvm_sr_handlers[i].save;
+
+if ( save_one_handler )
+{
+struct vcpu *v;
+
+for_each_vcpu ( d, v )
+{
+printk(XENLOG_G_INFO "HVM %pv save: %s\n",
+   v, hvm_sr_handlers[i].name);
+if ( save_one_handler(v, h) != 0 )
+{
+printk(XENLOG_G_ERR
+   "HVM %pv save: failed to save type %"PRIu16"\n",
+   v, i);
+return -ENODATA;
+}
+}
+}
+else if ( handler )
 {
 printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
@@ -233,7 +251,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 printk(XENLOG_G_ERR
"HVM%d save: failed to save type %"PRIu16"\n",
d->domain_id, i);
-return -EFAULT;
+return -ENODATA;
 }
 }
 }
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v20 10/13] x86/hvm: Add handler for save_one funcs

2018-09-10 Thread Alexandru Isaila
Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Change handler name from hvm_save_one_handler to 
hvm_save_vcpu_handler.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/emul-i8254.c  | 2 +-
 xen/arch/x86/hvm/hpet.c| 2 +-
 xen/arch/x86/hvm/hvm.c | 7 +--
 xen/arch/x86/hvm/irq.c | 6 +++---
 xen/arch/x86/hvm/mtrr.c| 4 ++--
 xen/arch/x86/hvm/pmtimer.c | 2 +-
 xen/arch/x86/hvm/rtc.c | 2 +-
 xen/arch/x86/hvm/save.c| 3 +++
 xen/arch/x86/hvm/vioapic.c | 2 +-
 xen/arch/x86/hvm/viridian.c| 3 ++-
 xen/arch/x86/hvm/vlapic.c  | 8 
 xen/arch/x86/hvm/vpic.c| 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +-
 14 files changed, 31 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index c2b2b6623c..71afc06f9a 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -397,6 +397,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 #endif
 
diff --git a/xen/arch/x86/emul-i8254.c b/xen/arch/x86/emul-i8254.c
index 7f1ded2623..a85dfcccbc 100644
--- a/xen/arch/x86/emul-i8254.c
+++ b/xen/arch/x86/emul-i8254.c
@@ -438,7 +438,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 #endif
 
 void pit_reset(struct domain *d)
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index cbd1efbc9f..4d8f6da2d9 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -695,7 +695,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1669957f1c..58c03bed15 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -776,6 +776,7 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1156,8 +1157,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
-  1, HVMSR_PER_VCPU);
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+  hvm_load_cpu_ctxt, 1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
save_area) + \
@@ -1508,6 +1509,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_XSAVE_CODE,
 "CPU_XSAVE",
 hvm_save_cpu_xsave_states,
+hvm_save_cpu_xsave_states_one,
 hvm_load_cpu_xsave_states,
 HVM_CPU_XSAVE_SIZE(xfeature_mask) +
 sizeof(struct hvm_save_descriptor),
@@ -1520,6 +1522,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_MSR_CODE,
 "CPU_MSR",
 hvm_save_cpu_msrs,
+hvm_save_cpu_msrs_one,
 hvm_load_cpu_msrs,
 HVM_CPU_MSR_SIZE(msr_count_max) +
 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index fe2c2fa06c..9502bae645 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -773,9 +773,9 @@ static int irq_load_link(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
   1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index f3dd972b4a..2ddf5074cb 

[Xen-devel] [PATCH v20 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-09-10 Thread Alexandru Isaila
This patch changes hvm_save_one() to save one typecode from one vcpu.
Now that the save functions get data from a single vcpu, we can pause
the specific vcpu instead of the whole domain.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V19:
- Replace d->vcpu[instance] with local variable v.
---
 xen/arch/x86/domctl.c   |  2 --
 xen/arch/x86/hvm/save.c | 10 ++
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 797841e803..2284128e93 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -599,12 +599,10 @@ long arch_do_domctl(
  !is_hvm_domain(d) )
 break;
 
-domain_pause(d);
 ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
domctl->u.hvmcontext_partial.instance,
domctl->u.hvmcontext_partial.buffer,
 &domctl->u.hvmcontext_partial.bufsz);
-domain_unpause(d);
 
 if ( !ret )
 copyback = true;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 96e77c9e4a..f06c0b31c1 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -156,6 +156,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 if ( !ctxt.data )
 return -ENOMEM;
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_pause(v);
+else
+domain_pause(d);
+
 if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )
 printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
@@ -187,6 +192,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 }
 }
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_unpause(v);
+else
+domain_unpause(d);
+
 xfree(ctxt.data);
 return rv;
 }
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v20 12/13] x86/hvm: Remove redundant save functions

2018-09-10 Thread Alexandru Isaila
This patch removes the redundant save functions and renames the
save_one* variants to save. It then changes the domain parameter to
vcpu in the save functions and adapts the print messages to match the
format of the other save-related messages.

Signed-off-by: Alexandru Isaila 

---
Changes since V19:
- Move v initialization after bound check
- Moved the conditional expression inside the square brackets.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 18 +---
 xen/arch/x86/emul-i8254.c  |  5 ++-
 xen/arch/x86/hvm/hpet.c|  7 ++--
 xen/arch/x86/hvm/hvm.c | 75 +++---
 xen/arch/x86/hvm/irq.c | 15 ---
 xen/arch/x86/hvm/mtrr.c| 22 ++
 xen/arch/x86/hvm/pmtimer.c |  5 ++-
 xen/arch/x86/hvm/rtc.c |  5 ++-
 xen/arch/x86/hvm/save.c| 29 +++--
 xen/arch/x86/hvm/vioapic.c |  5 ++-
 xen/arch/x86/hvm/viridian.c| 23 ++-
 xen/arch/x86/hvm/vlapic.c  | 38 ++---
 xen/arch/x86/hvm/vpic.c|  5 ++-
 xen/include/asm-x86/hvm/save.h |  8 +---
 14 files changed, 64 insertions(+), 196 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 71afc06f9a..f15835e9f6 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -350,7 +350,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 }
 
 #if CONFIG_HVM
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_vmce_vcpu ctxt = {
 .caps = v->arch.vmce.mcg_cap,
@@ -362,21 +362,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = vmce_save_vcpu_ctxt_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -397,7 +382,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 #endif
 
diff --git a/xen/arch/x86/emul-i8254.c b/xen/arch/x86/emul-i8254.c
index a85dfcccbc..73be4188ad 100644
--- a/xen/arch/x86/emul-i8254.c
+++ b/xen/arch/x86/emul-i8254.c
@@ -391,8 +391,9 @@ void pit_stop_channel0_irq(PITState *pit)
 spin_unlock(&pit->lock);
 }
 
-static int pit_save(struct domain *d, hvm_domain_context_t *h)
+static int pit_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+struct domain *d = v->domain;
 PITState *pit = domain_vpit(d);
 int rc;
 
@@ -438,7 +439,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
 #endif
 
 void pit_reset(struct domain *d)
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 4d8f6da2d9..be371ecc0b 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -570,16 +570,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+const struct domain *d = v->domain;
 HPETState *hp = domain_vhpet(d);
-struct vcpu *v = pt_global_vcpu_target(d);
 int rc;
 uint64_t guest_time;
 
 if ( !has_vhpet(d) )
 return 0;
 
+v = pt_global_vcpu_target(d);
 write_lock(&hp->lock);
 guest_time = (v->arch.hvm.guest_time ?: hvm_get_guest_time(v)) /
  STIME_PER_HPET_TICK;
@@ -695,7 +696,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 58c03bed15..43145586c5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -731,7 +731,7 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_tsc_adjust ctxt = {
 .tsc_adjust = v->arch.hvm.msr_tsc_adjust,
@@ -740,21 +740,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
 }
 
-static int hvm_save_tsc_adjust(struct domain *d, 

[Xen-devel] [PATCH v20 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 

---
Changes since V14:
- Moved all the operations in the initializer.
---
 xen/arch/x86/hvm/viridian.c | 30 +++---
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index a23d0876c4..2df0127a46 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1030,24 +1030,32 @@ static int viridian_load_domain_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
   viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_viridian_vcpu_context ctxt = {
+.vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw,
+.vp_assist_pending = v->arch.hvm.viridian.vp_assist.pending,
+};
 
-if ( !is_viridian_domain(d) )
+if ( !is_viridian_domain(v->domain) )
 return 0;
 
-for_each_vcpu( d, v ) {
-struct hvm_viridian_vcpu_context ctxt = {
-.vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw,
-.vp_assist_pending = v->arch.hvm.viridian.vp_assist.pending,
-};
+return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-return 1;
+for_each_vcpu ( d, v )
+{
+err = viridian_save_vcpu_ctxt_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v20 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V13:
- Moved tsc_adjust to the initializer.
---
 xen/arch/x86/hvm/hvm.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c198c9190a..b0cf3a836f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -731,16 +731,23 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_tsc_adjust ctxt = {
+.tsc_adjust = v->arch.hvm.msr_tsc_adjust,
+};
+
+return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_tsc_adjust ctxt;
 int err = 0;
 
 for_each_vcpu ( d, v )
 {
-ctxt.tsc_adjust = v->arch.hvm.msr_tsc_adjust;
-err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+err = hvm_save_tsc_adjust_one(v, h);
 if ( err )
 break;
 }
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v20 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return.
---
 xen/arch/x86/hvm/hvm.c | 106 +++--
 1 file changed, 59 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1013b6ecc4..1669957f1c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1339,69 +1339,81 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+struct hvm_msr *ctxt;
+unsigned int i;
+int err;
 
-for_each_vcpu ( d, v )
+err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+if ( err )
+return err;
+ctxt = (struct hvm_msr *)&h->data[h->cur];
+ctxt->count = 0;
+
+for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
 {
-struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
-struct hvm_msr *ctxt;
-unsigned int i;
+uint64_t val;
+int rc = guest_rdmsr(v, msrs_to_send[i], &val);
 
-if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
-return 1;
-ctxt = (struct hvm_msr *)&h->data[h->cur];
-ctxt->count = 0;
+/*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+if ( rc == X86EMUL_EXCEPTION )
+continue;
 
-for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+if ( rc != X86EMUL_OKAY )
 {
-uint64_t val;
-int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+ASSERT_UNREACHABLE();
+return -ENXIO;
+}
 
-/*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
-if ( rc == X86EMUL_EXCEPTION )
-continue;
+if ( !val )
+continue; /* Skip empty MSRs. */
 
-if ( rc != X86EMUL_OKAY )
-{
-ASSERT_UNREACHABLE();
-return -ENXIO;
-}
+ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ctxt->msr[ctxt->count++].val = val;
+}
 
-if ( !val )
-continue; /* Skip empty MSRs. */
+if ( hvm_funcs.save_msr )
+hvm_funcs.save_msr(v, ctxt);
 
-ctxt->msr[ctxt->count].index = msrs_to_send[i];
-ctxt->msr[ctxt->count++].val = val;
-}
+ASSERT(ctxt->count <= msr_count_max);
 
-if ( hvm_funcs.save_msr )
-hvm_funcs.save_msr(v, ctxt);
+for ( i = 0; i < ctxt->count; ++i )
+ctxt->msr[i]._rsvd = 0;
 
-ASSERT(ctxt->count <= msr_count_max);
+if ( ctxt->count )
+{
+/* Rewrite length to indicate how much space we actually used. */
+desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+}
+else
+/* or rewind and remove the descriptor from the stream. */
+h->cur -= sizeof(struct hvm_save_descriptor);
 
-for ( i = 0; i < ctxt->count; ++i )
-ctxt->msr[i]._rsvd = 0;
+return 0;
+}
 
-if ( ctxt->count )
-{
-/* Rewrite length to indicate how much space we actually used. */
-desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-}
-else
-/* or rewind and remove the descriptor from the stream. */
-h->cur -= sizeof(struct hvm_save_descriptor);
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_msrs_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v20 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v16:
- Address style comments.
---
 xen/arch/x86/hvm/mtrr.c | 80 ++---
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index de1b5c4614..f3dd972b4a 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -690,52 +690,58 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
 return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+const struct mtrr_state *mtrr_state = &v->arch.hvm.mtrr;
+struct hvm_hw_mtrr hw_mtrr = {
+.msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+   MTRRdefType_FE) |
+MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+.msr_mtrr_cap  = mtrr_state->mtrr_cap,
+};
+unsigned int i;
 
-/* save mtrr */
-for_each_vcpu(d, v)
+if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
 {
-const struct mtrr_state *mtrr_state = &v->arch.hvm.mtrr;
-struct hvm_hw_mtrr hw_mtrr = {
-.msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
-   MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-.msr_mtrr_cap  = mtrr_state->mtrr_cap,
-};
-unsigned int i;
+dprintk(XENLOG_G_ERR,
+"HVM save: %pv: too many (%lu) variable range MTRRs\n",
+v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+return -EINVAL;
+}
 
-if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-{
-dprintk(XENLOG_G_ERR,
-"HVM save: %pv: too many (%lu) variable range MTRRs\n",
-v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-return -EINVAL;
-}
+hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+
+for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+{
+hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges[i].base;
+hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges[i].mask;
+}
 
-hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
+ sizeof(mtrr_state->fixed_ranges));
 
-for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-{
-/* save physbase */
-hw_mtrr.msr_mtrr_var[i*2] =
-((uint64_t*)mtrr_state->var_ranges)[i*2];
-/* save physmask */
-hw_mtrr.msr_mtrr_var[i*2+1] =
-((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-}
+memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
+   sizeof(hw_mtrr.msr_mtrr_fixed));
 
-for ( i = 0; i < NUM_FIXED_MSR; i++ )
-hw_mtrr.msr_mtrr_fixed[i] =
-((uint64_t*)mtrr_state->fixed_ranges)[i];
+return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
+
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
-return 1;
+/* save mtrr */
+for_each_vcpu(d, v)
+{
+   err = hvm_save_mtrr_msr_one(v, h);
+   if ( err )
+   break;
 }
-return 0;
+
+return err;
 }
 
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v20 00/13] x86/domctl: Save info for one vcpu instance

2018-09-10 Thread Alexandru Isaila
Hi all,

This patch series addresses the idea of saving data from a single vcpu
instance.
It starts by adding *_save_one functions, then introduces a handler for the
new save_one funcs and makes use of it in the hvm_save and hvm_save_one funcs.
The final patches clean things up and change the hvm_save_one() func while
changing domain_pause to vcpu_pause.
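
The common shape of each conversion is sketched below (illustrative only;
FOO and the saved field are placeholders rather than code from the series,
the real handlers are in the individual patches):

static int hvm_save_foo_one(struct vcpu *v, hvm_domain_context_t *h)
{
    /* Gather this vCPU's state and append one entry to the stream. */
    return hvm_save_entry(FOO, v->vcpu_id, h, &v->arch.foo);
}

static int hvm_save_foo(struct domain *d, hvm_domain_context_t *h)
{
    struct vcpu *v;
    int err = 0;

    /* Thin per-domain wrapper, kept until the save_one handlers land. */
    for_each_vcpu ( d, v )
    {
        err = hvm_save_foo_one(v, h);
        if ( err )
            break;
    }

    return err;
}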

Cheers,

NOTE: Tested with tools/misc/xen-hvmctx, tools/xentrace/xenctx, xl save/restore,
custom hvm_getcontext/partial code and debug the getcontext part for guest boot.

Alexandru Isaila (13):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Introduce lapic_save_hidden_one
x86/hvm: Introduce lapic_save_regs_one func
x86/hvm: Add handler for save_one funcs
x86/domctl: Use hvm_save_vcpu_handler
x86/hvm: Remove redundant save functions
x86/domctl: Don't pause the whole domain if only getting vcpu state


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v20 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Move all free fields to the initializer
- Add blank line to before the return
- Move v->pause_flags check to the save_one function.
---
 xen/arch/x86/hvm/hvm.c | 219 +
 1 file changed, 113 insertions(+), 106 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b0cf3a836f..e1133f64d7 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -778,119 +778,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct segment_register seg;
+struct hvm_hw_cpu ctxt = {
+.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm.sync_tsc),
+.msr_tsc_aux = hvm_msr_tsc_aux(v),
+.rax = v->arch.user_regs.rax,
+.rbx = v->arch.user_regs.rbx,
+.rcx = v->arch.user_regs.rcx,
+.rdx = v->arch.user_regs.rdx,
+.rbp = v->arch.user_regs.rbp,
+.rsi = v->arch.user_regs.rsi,
+.rdi = v->arch.user_regs.rdi,
+.rsp = v->arch.user_regs.rsp,
+.rip = v->arch.user_regs.rip,
+.rflags = v->arch.user_regs.rflags,
+.r8  = v->arch.user_regs.r8,
+.r9  = v->arch.user_regs.r9,
+.r10 = v->arch.user_regs.r10,
+.r11 = v->arch.user_regs.r11,
+.r12 = v->arch.user_regs.r12,
+.r13 = v->arch.user_regs.r13,
+.r14 = v->arch.user_regs.r14,
+.r15 = v->arch.user_regs.r15,
+.dr0 = v->arch.debugreg[0],
+.dr1 = v->arch.debugreg[1],
+.dr2 = v->arch.debugreg[2],
+.dr3 = v->arch.debugreg[3],
+.dr6 = v->arch.debugreg[6],
+.dr7 = v->arch.debugreg[7],
+};
+
+/*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
+if ( v->pause_flags & VPF_down )
+return 0;
+
+/* Architecture-specific vmcs/vmcb bits */
+hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+hvm_get_segment_register(v, x86_seg_idtr, &seg);
+ctxt.idtr_limit = seg.limit;
+ctxt.idtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+ctxt.gdtr_limit = seg.limit;
+ctxt.gdtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_cs, &seg);
+ctxt.cs_sel = seg.sel;
+ctxt.cs_limit = seg.limit;
+ctxt.cs_base = seg.base;
+ctxt.cs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ds, &seg);
+ctxt.ds_sel = seg.sel;
+ctxt.ds_limit = seg.limit;
+ctxt.ds_base = seg.base;
+ctxt.ds_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_es, &seg);
+ctxt.es_sel = seg.sel;
+ctxt.es_limit = seg.limit;
+ctxt.es_base = seg.base;
+ctxt.es_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ss, &seg);
+ctxt.ss_sel = seg.sel;
+ctxt.ss_limit = seg.limit;
+ctxt.ss_base = seg.base;
+ctxt.ss_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_fs, &seg);
+ctxt.fs_sel = seg.sel;
+ctxt.fs_limit = seg.limit;
+ctxt.fs_base = seg.base;
+ctxt.fs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_gs, &seg);
+ctxt.gs_sel = seg.sel;
+ctxt.gs_limit = seg.limit;
+ctxt.gs_base = seg.base;
+ctxt.gs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_tr, &seg);
+ctxt.tr_sel = seg.sel;
+ctxt.tr_limit = seg.limit;
+ctxt.tr_base = seg.base;
+ctxt.tr_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+ctxt.ldtr_sel = seg.sel;
+ctxt.ldtr_limit = seg.limit;
+ctxt.ldtr_base = seg.base;
+ctxt.ldtr_arbytes = seg.attr;
+
+if ( v->fpu_initialised )
+{
+memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+ctxt.flags = XEN_X86_FPU_INITIALISED;
+}
+
+return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_hw_cpu ctxt;
-struct segment_register seg;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-/* We don't need to save state for a vcpu that is down; the restore 
- * code will leave it down if there is nothing saved. */
-if ( v->pause_flags & VPF_down )
-continue;
-
-memset(&ctxt, 0, sizeof(ctxt));
-
-/* Architecture-specific vmcs/vmcb bits */
-hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
-ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm.sync_tsc);
-
-ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-hvm_get_segment_register(v, x86_seg_idtr, &seg);
-ctxt.idtr_limit = seg.limit;
-ctxt.idtr_base = seg.base;
-
-hvm_get_segment_register(v, 

[Xen-devel] [PATCH v20 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return
- Move xsave_enabled() check to the save_one func.
---
 xen/arch/x86/hvm/hvm.c | 47 ++
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e1133f64d7..1013b6ecc4 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1163,35 +1163,46 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
save_area) + \
   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
 struct hvm_hw_cpu_xsave *ctxt;
+unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+int err;
 
-if ( !cpu_has_xsave )
+if ( !cpu_has_xsave || !xsave_enabled(v) )
 return 0;   /* do nothing */
 
-for_each_vcpu ( d, v )
-{
-unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+if ( err )
+return err;
 
-if ( !xsave_enabled(v) )
-continue;
-if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-return 1;
-ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
-h->cur += size;
+ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+h->cur += size;
+ctxt->xfeature_mask = xfeature_mask;
+ctxt->xcr0 = v->arch.xcr0;
+ctxt->xcr0_accum = v->arch.xcr0_accum;
 
-ctxt->xfeature_mask = xfeature_mask;
-ctxt->xcr0 = v->arch.xcr0;
-ctxt->xcr0_accum = v->arch.xcr0_accum;
-expand_xsave_states(v, &ctxt->save_area,
-size - offsetof(typeof(*ctxt), save_area));
-}
+expand_xsave_states(v, &ctxt->save_area,
+size - offsetof(typeof(*ctxt), save_area));
 
 return 0;
 }
 
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_xsave_states_one(v, h);
+if ( err )
+break;
+}
+
+return err;
+}
+
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v20 08/13] x86/hvm: Introduce lapic_save_hidden_one

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 20 
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 04702e96c9..31c7a66d01 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1399,23 +1399,27 @@ static void lapic_rearm(struct vlapic *s)
 s->timer_last_update = s->pt.last_plt_gtime;
 }
 
+static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+return hvm_save_entry(LAPIC, v->vcpu_id, h, &vcpu_vlapic(v)->hw);
+}
+
 static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
+err = lapic_save_hidden_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v20 09/13] x86/hvm: Introduce lapic_save_regs_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 26 +++---
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 31c7a66d01..8b2955365f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1422,26 +1422,30 @@ static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 return err;
 }
 
+static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+if ( hvm_funcs.sync_pir_to_irr )
+hvm_funcs.sync_pir_to_irr(v);
+
+return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
+}
+
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-if ( hvm_funcs.sync_pir_to_irr )
-hvm_funcs.sync_pir_to_irr(v);
-
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
+err = lapic_save_regs_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 /*
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [linux-linus test] 127458: regressions - FAIL

2018-09-10 Thread osstest service owner
flight 127458 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/127458/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install 
fail REGR. vs. 125898
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 125898
 test-amd64-i386-rumprun-i386 12 guest-start  fail REGR. vs. 125898
 test-amd64-amd64-rumprun-amd64 12 guest-startfail REGR. vs. 125898
 test-amd64-i386-xl-shadow12 guest-start  fail REGR. vs. 125898
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail 
REGR. vs. 125898
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail 
REGR. vs. 125898
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 
125898
 test-amd64-amd64-libvirt-vhd 10 debian-di-installfail REGR. vs. 125898
 test-amd64-amd64-pair21 guest-start/debian   fail REGR. vs. 125898
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 125898
 test-amd64-amd64-libvirt 12 guest-start  fail REGR. vs. 125898
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install 
fail REGR. vs. 125898
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail 
REGR. vs. 125898
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. 
vs. 125898
 test-amd64-amd64-xl-pvshim   12 guest-start  fail REGR. vs. 125898
 test-amd64-amd64-libvirt-xsm 12 guest-start  fail REGR. vs. 125898
 test-amd64-amd64-xl-shadow   12 guest-start  fail REGR. vs. 125898
 test-amd64-i386-freebsd10-i386 11 guest-startfail REGR. vs. 125898
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 125898
 test-amd64-i386-xl-xsm   12 guest-start  fail REGR. vs. 125898
 test-amd64-i386-xl   12 guest-start  fail REGR. vs. 125898
 test-amd64-amd64-xl-credit2  12 guest-start  fail REGR. vs. 125898
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install fail REGR. vs. 125898
 test-amd64-amd64-libvirt-pair 21 guest-start/debian  fail REGR. vs. 125898
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 125898
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail 
REGR. vs. 125898
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 125898
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 
125898
 test-amd64-amd64-xl-pvhv2-intel  7 xen-boot  fail REGR. vs. 125898
 test-amd64-amd64-xl-multivcpu  7 xen-bootfail REGR. vs. 125898
 test-amd64-amd64-xl-qemuu-win7-amd64  7 xen-boot fail REGR. vs. 125898
 test-amd64-amd64-pygrub   7 xen-boot fail REGR. vs. 125898
 test-amd64-amd64-xl-qcow2 7 xen-boot fail REGR. vs. 125898
 test-amd64-amd64-xl-xsm  12 guest-start  fail REGR. vs. 125898
 test-amd64-i386-examine   8 reboot   fail REGR. vs. 125898
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 125898
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 
125898
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-boot fail REGR. vs. 125898
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 125898
 test-amd64-i386-libvirt  12 guest-start  fail REGR. vs. 125898
 test-amd64-i386-xl-qemut-win10-i386  7 xen-boot  fail REGR. vs. 125898
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. 
vs. 125898
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 125898
 test-amd64-i386-libvirt-pair 21 guest-start/debian   fail REGR. vs. 125898
 test-amd64-amd64-xl  12 guest-start  fail REGR. vs. 125898
 test-amd64-i386-libvirt-xsm  12 guest-start  fail REGR. vs. 125898
 test-amd64-i386-qemut-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 125898
 test-amd64-i386-pair 21 guest-start/debian   fail REGR. vs. 125898
 test-amd64-i386-freebsd10-amd64 11 guest-start   fail REGR. vs. 125898
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail REGR. vs. 125898
 test-amd64-i386-qemut-rhel6hvm-amd 10 redhat-install fail REGR. vs. 125898
 test-amd64-amd64-xl-qemut-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 
125898
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. 
vs. 125898
 test-amd64-amd64-examine  8 reboot   fail REGR. vs. 125898
 test-amd64-amd64-amd64-pvgrub 10 debian-di-install   fail REGR. vs. 125898
 test-amd64-i386-xl-raw   10 debian-di-installfail REGR. vs. 125898
 test-amd64-amd64-i386-pvgrub 10 debian-di-installfail REGR. vs. 

Re: [Xen-devel] [PATCH v19 12/13] x86/hvm: Remove redundant save functions

2018-09-10 Thread Jan Beulich
>>> On 10.09.18 at 15:54,  wrote:
> On Mon, 2018-09-10 at 07:42 -0600, Jan Beulich wrote:
>> > > > On 10.09.18 at 15:33,  wrote:
>> > 
>> > On Mon, 2018-09-10 at 15:36 +0300, Alexandru Isaila wrote:
>> > > This patch removes the redundant save functions and renames the
>> > > save_one* to save. It then changes the domain param to vcpu in
>> > > the
>> > > save funcs and adapts print messages in order to match the format
>> > > of
>> > > the
>> > > other save related messages.
>> > > 
>> > > Signed-off-by: Alexandru Isaila 
>> > > 
>> > > ---
>> > > Changes since V18:
>> > >  - Add const struct domain to rtc_save and hpet_save
>> > >  - Latched the vCPU into a local variable in hvm_save_one()
>> > >  - Add HVMSR_PER_VCPU kind check to the bounds if.
>> > > ---
>> > >  xen/arch/x86/cpu/mcheck/vmce.c | 18 +---
>> > >  xen/arch/x86/emul-i8254.c  |  5 ++-
>> > >  xen/arch/x86/hvm/hpet.c|  7 ++--
>> > >  xen/arch/x86/hvm/hvm.c | 75 +++-
>> > > 
>> > > --
>> > >  xen/arch/x86/hvm/irq.c | 15 ---
>> > >  xen/arch/x86/hvm/mtrr.c| 22 ++
>> > >  xen/arch/x86/hvm/pmtimer.c |  5 ++-
>> > >  xen/arch/x86/hvm/rtc.c |  5 ++-
>> > >  xen/arch/x86/hvm/save.c| 28 +++--
>> > >  xen/arch/x86/hvm/vioapic.c |  5 ++-
>> > >  xen/arch/x86/hvm/viridian.c| 23 ++-
>> > >  xen/arch/x86/hvm/vlapic.c  | 38 ++---
>> > >  xen/arch/x86/hvm/vpic.c|  5 ++-
>> > >  xen/include/asm-x86/hvm/save.h |  8 +---
>> > >  14 files changed, 63 insertions(+), 196 deletions(-)
>> > > 
>> > > @@ -141,6 +138,8 @@ int hvm_save_one(struct domain *d, unsigned
>> > > int
>> > > typecode, unsigned int instance,
>> > >  int rv;
>> > >  hvm_domain_context_t ctxt = { };
>> > >  const struct hvm_save_descriptor *desc;
>> > > +struct vcpu *v = (hvm_sr_handlers[typecode].kind ==
>> > > HVMSR_PER_VCPU) ?
>> > > + d->vcpu[instance] : d->vcpu[0];
>> > >  
>> > 
>> > Sorry for the inconvenience but I've just realized that this has to
>> > be
>> > initialized after the bounds check. I will have this in mind.
>> 
>> Also to eliminate redundancy I'd prefer if you moved the conditional
>> expression inside the square brackets.
>> 
> Are these changes worth waiting 24h?

That's up to you in this case, I'd say.

Jan



___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v2] x86: use VMLOAD for PV context switch

2018-09-10 Thread Jan Beulich
Having noticed that VMLOAD alone is about as fast as a single one of the
involved WRMSRs, I thought it might be a reasonable idea to also use it
for PV. Measurements, however, have shown that an actual improvement can
be achieved only with an early prefetch of the VMCB (thanks to Andrew
for suggesting to try this), which I have to admit I can't really
explain. This way on my Fam15 box context switch takes over 100 clocks
less on average (the measured values are heavily varying in all cases,
though).

This is intentionally not using a new hvm_funcs hook: For one, this is
all about PV, and something similar can hardly be done for VMX.
Furthermore the indirect to direct call patching that is meant to be
applied to most hvm_funcs hooks would be ugly to make work with
functions having more than 6 parameters.
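
In code terms the idea is simply the following (sketch, not the patch
itself; parameter names are illustrative, the real hunks are below):

    /* Early in __context_switch(): dummy call, only to prefetch the VMCB. */
    svm_load_segs(0, 0, 0, 0, 0, 0, 0);

    /* Later, in load_segments(): the real invocation, ending in VMLOAD. */
    svm_load_segs(ldt_ents, ldt_base, fs_sel, fs_base,
                  gs_sel, gs_base, gs_shadow);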

Signed-off-by: Jan Beulich 
Acked-by: Brian Woods 
---
v2: Re-base.
---
Besides the mentioned oddity with measured performance, I've also
noticed a significant difference (of at least 150 clocks) between
measuring immediately around the calls to svm_load_segs() and measuring
immediately inside the function.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -52,6 +52,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1281,11 +1282,35 @@ static void load_segments(struct vcpu *n)
 struct cpu_user_regs *uregs = &n->arch.user_regs;
 int all_segs_okay = 1;
 unsigned int dirty_segment_mask, cpu = smp_processor_id();
+bool fs_gs_done = false;
 
 /* Load and clear the dirty segment mask. */
 dirty_segment_mask = per_cpu(dirty_segment_mask, cpu);
 per_cpu(dirty_segment_mask, cpu) = 0;
 
+#ifdef CONFIG_HVM
+if ( !is_pv_32bit_vcpu(n) && !cpu_has_fsgsbase && cpu_has_svm &&
+ !((uregs->fs | uregs->gs) & ~3) &&
+ /*
+  * The remaining part is just for optimization: If only shadow GS
+  * needs loading, there's nothing to be gained here.
+  */
+ (n->arch.pv.fs_base | n->arch.pv.gs_base_user) )
+{
+fs_gs_done = n->arch.flags & TF_kernel_mode
+? svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n),
+uregs->fs, n->arch.pv.fs_base,
+uregs->gs, n->arch.pv.gs_base_kernel,
+n->arch.pv.gs_base_user)
+: svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n),
+uregs->fs, n->arch.pv.fs_base,
+uregs->gs, n->arch.pv.gs_base_user,
+n->arch.pv.gs_base_kernel);
+}
+#endif
+if ( !fs_gs_done )
+load_LDT(n);
+
 /* Either selector != 0 ==> reload. */
 if ( unlikely((dirty_segment_mask & DIRTY_DS) | uregs->ds) )
 {
@@ -1301,7 +1326,7 @@ static void load_segments(struct vcpu *n)
 }
 
 /* Either selector != 0 ==> reload. */
-if ( unlikely((dirty_segment_mask & DIRTY_FS) | uregs->fs) )
+if ( unlikely((dirty_segment_mask & DIRTY_FS) | uregs->fs) && !fs_gs_done )
 {
 all_segs_okay &= loadsegment(fs, uregs->fs);
 /* non-nul selector updates fs_base */
@@ -1310,7 +1335,7 @@ static void load_segments(struct vcpu *n)
 }
 
 /* Either selector != 0 ==> reload. */
-if ( unlikely((dirty_segment_mask & DIRTY_GS) | uregs->gs) )
+if ( unlikely((dirty_segment_mask & DIRTY_GS) | uregs->gs) && !fs_gs_done )
 {
 all_segs_okay &= loadsegment(gs, uregs->gs);
 /* non-nul selector updates gs_base_user */
@@ -1318,7 +1343,7 @@ static void load_segments(struct vcpu *n)
 dirty_segment_mask &= ~DIRTY_GS_BASE;
 }
 
-if ( !is_pv_32bit_vcpu(n) )
+if ( !fs_gs_done && !is_pv_32bit_vcpu(n) )
 {
 /* This can only be non-zero if selector is NULL. */
 if ( n->arch.pv.fs_base | (dirty_segment_mask & DIRTY_FS_BASE) )
@@ -1653,6 +1678,12 @@ static void __context_switch(void)
 
 write_ptbase(n);
 
+#if defined(CONFIG_PV) && defined(CONFIG_HVM)
+if ( is_pv_domain(nd) && !is_pv_32bit_domain(nd) && !is_idle_domain(nd) &&
+ !cpu_has_fsgsbase && cpu_has_svm )
+svm_load_segs(0, 0, 0, 0, 0, 0, 0);
+#endif
+
 if ( need_full_gdt(nd) &&
  ((p->vcpu_id != n->vcpu_id) || !need_full_gdt(pd)) )
 {
@@ -1714,10 +1745,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 local_irq_enable();
 
 if ( is_pv_domain(nextd) )
-{
-load_LDT(next);
 load_segments(next);
-}
 
 ctxt_switch_levelling(next);
 
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -78,6 +78,9 @@ static struct hvm_function_table svm_fun
  */
 static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, hsa);
 static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, host_vmcb);
+#ifdef CONFIG_PV
+static DEFINE_PER_CPU(struct vmcb_struct *, host_vmcb_va);
+#endif
 
 static bool_t amd_erratum383_found __read_mostly;
 
@@ -1567,6 +1570,14 @@ static void svm_cpu_dead(unsigned int cpu)
 *this_hsa = 

Re: [Xen-devel] [PATCH v2 09/13] optee: add support for arbitrary shared memory

2018-09-10 Thread Julien Grall

Hi,

On 03/09/18 17:54, Volodymyr Babchuk wrote:

Shared memory is widely used by NW to communicate with
TAs in OP-TEE. NW can share part of its own memory with
a TA or the OP-TEE core, by registering it with OP-TEE, or by
providing a temporary reference. Either way, information about such
memory buffers is sent to OP-TEE as a list of pages. This mechanism
is described in optee_msg.h.

Mediator should step in when NW tries to share memory with
OP-TEE for two reasons:

1. Do address translation from IPA to PA.
2. Pin domain pages till they are mapped into OP-TEE or TA
address space, so the domain can't transfer these pages to
another domain or baloon them out.


s/baloon/balloon/



Address translation is done by the translate_noncontig(...) function.
It allocates a new buffer from the xenheap and then walks the
guest-provided list of pages, translates the addresses and stores the
PAs into the newly allocated buffer. This buffer will be provided to
OP-TEE instead of the original buffer from the guest. It will be
freed at the end of the standard call.

At the same time this function pins the pages and stores them in a
struct shm_buf object. This object lives for as long as the given
SHM buffer is known to OP-TEE, and is freed after the guest
unregisters the shared buffer. At that point the pages are
unpinned.
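
The core of that flow can be pictured like this (a rough sketch with
hypothetical names; translate_noncontig() in the patch below is the real
logic):

static int translate_and_pin_sketch(struct domain *d, unsigned int nr,
                                    const paddr_t *guest_ipas,
                                    paddr_t *pa_list, /* xenheap buffer */
                                    struct page_info **pages)
{
    unsigned int i;

    for ( i = 0; i < nr; i++ )
    {
        /* Taking a reference pins the page, so the guest can neither
         * hand it to another domain nor balloon it out. */
        struct page_info *pg = get_page_from_gfn(d,
                                                 paddr_to_pfn(guest_ipas[i]),
                                                 NULL, P2M_ALLOC);

        if ( !pg )
            return -EINVAL; /* caller unwinds with put_page() on pages[] */

        pages[i] = pg;
        /* Store the translated PA where OP-TEE will look for it. */
        pa_list[i] = page_to_maddr(pg);
    }

    return 0;
}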

Signed-off-by: Volodymyr Babchuk 
---
  xen/arch/arm/tee/optee.c | 245 ++-
  1 file changed, 244 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 6d6b51d..8bfcfdc 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -22,6 +22,8 @@
  
  #define MAX_STD_CALLS   16

  #define MAX_RPC_SHMS16
+#define MAX_TOTAL_SMH_BUF_PG16384


So that's 64MB worth of guest memory. Do we expect them to be mapped in 
Xen? Or just pinned?



+#define MAX_NONCONTIG_ENTRIES   5
  
  /*

   * Call context. OP-TEE can issue multiple RPC returns during one call.
@@ -31,6 +33,9 @@ struct std_call_ctx {
  struct list_head list;
  struct optee_msg_arg *guest_arg;
  struct optee_msg_arg *xen_arg;
+/* Buffer for translated page addresses, shared with OP-TEE */
+void *non_contig[MAX_NONCONTIG_ENTRIES];
+int non_contig_order[MAX_NONCONTIG_ENTRIES];


Can you please introduce a structure with the order and mapping?


  mfn_t guest_arg_mfn;
  int optee_thread_id;
  int rpc_op;
@@ -45,13 +50,24 @@ struct shm_rpc {
  uint64_t cookie;
  };
  
+/* Shared memory buffer for arbitrary data */

+struct shm_buf {
+struct list_head list;
+uint64_t cookie;
+int max_page_cnt;
+int page_cnt;


AFAICT, max_page_cnt and page_cnt should never be negative. If so, then 
they should be unsigned.



+struct page_info *pages[];
+};
+
  struct domain_ctx {
  struct list_head list;
  struct list_head call_ctx_list;
  struct list_head shm_rpc_list;
+struct list_head shm_buf_list;
  struct domain *domain;
  atomic_t call_ctx_count;
  atomic_t shm_rpc_count;
+atomic_t shm_buf_pages;
  spinlock_t lock;
  };
  
@@ -158,9 +174,12 @@ static int optee_enable(struct domain *d)

  ctx->domain = d;
  INIT_LIST_HEAD(&ctx->call_ctx_list);
  INIT_LIST_HEAD(&ctx->shm_rpc_list);
+INIT_LIST_HEAD(&ctx->shm_buf_list);
  
  atomic_set(&ctx->call_ctx_count, 0);

  atomic_set(&ctx->shm_rpc_count, 0);
+atomic_set(&ctx->shm_buf_pages, 0);
+
  spin_lock_init(&ctx->lock);
  
  spin_lock(&domain_ctx_list_lock);

@@ -339,12 +358,76 @@ static void free_shm_rpc(struct domain_ctx *ctx, uint64_t cookie)
  xfree(shm_rpc);
  }
  
+static struct shm_buf *allocate_shm_buf(struct domain_ctx *ctx,

+uint64_t cookie,
+int pages_cnt)


Ditto.


+{
+struct shm_buf *shm_buf;
+
+while(1)
+{
+int old = atomic_read(&ctx->shm_buf_pages);
+int new = old + pages_cnt;
+if ( new >= MAX_TOTAL_SMH_BUF_PG )
+return NULL;
+if ( likely(old == atomic_cmpxchg(&ctx->shm_buf_pages, old, new)) )
+break;
+}
+
+shm_buf = xzalloc_bytes(sizeof(struct shm_buf) +
+pages_cnt * sizeof(struct page *));
+if ( !shm_buf ) {


Coding style:

if ( ... )
{


+atomic_sub(pages_cnt, &ctx->shm_buf_pages);
+return NULL;
+}
+
+shm_buf->cookie = cookie;
+shm_buf->max_page_cnt = pages_cnt;
+
+spin_lock(&ctx->lock);
+list_add_tail(&shm_buf->list, &ctx->shm_buf_list);
+spin_unlock(&ctx->lock);
+
+return shm_buf;
+}
+
+static void free_shm_buf(struct domain_ctx *ctx, uint64_t cookie)
+{
+struct shm_buf *shm_buf;
+bool found = false;
+
+spin_lock(&ctx->lock);
+list_for_each_entry( shm_buf, &ctx->shm_buf_list, list )
+{
+if ( shm_buf->cookie == cookie )


What does guarantee you the cookie will be unique?


+{
+found = true;
+list_del(&shm_buf->list);
+break;
+}
+}
+spin_unlock(&ctx->lock);



At this point you have 

Re: [Xen-devel] Xen 4.10.x and PCI passthrough

2018-09-10 Thread Roger Pau Monné
On Fri, Sep 07, 2018 at 07:23:43PM +0200, Andreas Kinzler wrote:
> Hello Roger,
> 
> in August 2017, I reported a problem with PCI passthrough and MSI interrupts
> (https://lists.xenproject.org/archives/html/xen-devel/2017-08/msg01433.html).
> 
> That report lead to some patches for Xen and qemu.
> 
> Some weeks ago I tried a quite new version of Xen 4.10.2-pre 
> (http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=a645331a9f4190e92ccf41a950bc4692f8904239)
> and the PCI card (LSI SAS HBA) using Windows 2012 R2 as a guest. Everything
> works but only to the point where Windows reboots -> then the card is no
> longer usable. If you destroy the domain and recreate the card again works.
> 
> Did I miss something simple or should we analyze the problem again using
> similar debug prints as before?

Not sure, but it doesn't look to me like this issue is related to the
one fixed by the patches mentioned above, I think this is a different
issue, and by the looks of it it's a toolstack issue.

Can you paste the output of `xl -vvv create ` and the
contents of the log that you will find in
/var/log/xen/xl-.log after you have attempted a reboot?

Roger.

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH] x86: improve vCPU selection in pagetable_dying()

2018-09-10 Thread Jan Beulich
Rather than unconditionally using vCPU 0, use the current vCPU if the
subject domain is the current one.

Signed-off-by: Jan Beulich 

--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -858,7 +858,7 @@ void pagetable_dying(struct domain *d, paddr_t gpa)
 
 ASSERT(paging_mode_shadow(d));
 
-v = d->vcpu[0];
+v = (d == current->domain) ? current : d->vcpu[0];
 v->arch.paging.mode->shadow.pagetable_dying(v, gpa);
 #else
 BUG();



___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH] x86/mm: re-indent after "re-arrange get_page_from_l1e() vs pv_l1tf_check_l1e()"

2018-09-10 Thread Jan Beulich
That earlier change introduced two "else switch ()" constructs which now
get converted back to "normal" style (indentation). To limit indentation
depth, a conditional gets inverted in ptwr_emulated_update().

No functional change intended.

Requested-by: Andrew Cooper 
Signed-off-by: Jan Beulich 

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1371,16 +1371,21 @@ static int alloc_l1_table(struct page_info *page)
 if ( ret )
 goto out;
 }
-else switch ( ret = get_page_from_l1e(pl1e[i], d, d) )
+else
 {
-default:
-goto fail;
-case 0:
-break;
-case _PAGE_RW ... _PAGE_RW | PAGE_CACHE_ATTRS:
-ASSERT(!(ret & ~(_PAGE_RW | PAGE_CACHE_ATTRS)));
-l1e_flip_flags(pl1e[i], ret);
-break;
+switch ( ret = get_page_from_l1e(pl1e[i], d, d) )
+{
+default:
+goto fail;
+
+case 0:
+break;
+
+case _PAGE_RW ... _PAGE_RW | PAGE_CACHE_ATTRS:
+ASSERT(!(ret & ~(_PAGE_RW | PAGE_CACHE_ATTRS)));
+l1e_flip_flags(pl1e[i], ret);
+break;
+}
 }
 
 pl1e[i] = adjust_guest_l1e(pl1e[i], d);
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -136,12 +136,18 @@ static int ptwr_emulated_update(unsigned
 if ( pv_l1tf_check_l1e(d, nl1e) )
 return X86EMUL_RETRY;
 }
-else switch ( ret = get_page_from_l1e(nl1e, d, d) )
+else
 {
-default:
-if ( is_pv_32bit_domain(d) && (bytes == 4) && (unaligned_addr & 4) &&
- !p_old && (l1e_get_flags(nl1e) & _PAGE_PRESENT) )
+switch ( ret = get_page_from_l1e(nl1e, d, d) )
 {
+default:
+if ( !is_pv_32bit_domain(d) || (bytes != 4) ||
+ !(unaligned_addr & 4) || p_old ||
+ !(l1e_get_flags(nl1e) & _PAGE_PRESENT) )
+{
+gdprintk(XENLOG_WARNING, "could not get_page_from_l1e()\n");
+return X86EMUL_UNHANDLEABLE;
+}
 /*
  * If this is an upper-half write to a PAE PTE then we assume that
  * the guest has simply got the two writes the wrong way round. We
@@ -151,19 +157,16 @@ static int ptwr_emulated_update(unsigned
 gdprintk(XENLOG_DEBUG, "ptwr_emulate: fixing up invalid PAE PTE %"
  PRIpte"\n", l1e_get_intpte(nl1e));
 l1e_remove_flags(nl1e, _PAGE_PRESENT);
+break;
+
+case 0:
+break;
+
+case _PAGE_RW ... _PAGE_RW | PAGE_CACHE_ATTRS:
+ASSERT(!(ret & ~(_PAGE_RW | PAGE_CACHE_ATTRS)));
+l1e_flip_flags(nl1e, ret);
+break;
 }
-else
-{
-gdprintk(XENLOG_WARNING, "could not get_page_from_l1e()\n");
-return X86EMUL_UNHANDLEABLE;
-}
-break;
-case 0:
-break;
-case _PAGE_RW ... _PAGE_RW | PAGE_CACHE_ATTRS:
-ASSERT(!(ret & ~(_PAGE_RW | PAGE_CACHE_ATTRS)));
-l1e_flip_flags(nl1e, ret);
-break;
 }
 
 nl1e = adjust_guest_l1e(nl1e, d);





___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4] x86/HVM: don't #GP/#SS on wrapping virt->linear translations

2018-09-10 Thread Jan Beulich
Real hardware wraps silently in most cases, so we should behave the
same. Also split real and VM86 mode handling, as the latter really
ought to have limit checks applied.

Signed-off-by: Jan Beulich 
---
v4: Re-base.
v3: Restore 32-bit wrap check for AMD.
v2: Extend to non-64-bit modes. Reduce 64-bit check to a single
is_canonical_address() invocation.

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2444,16 +2444,21 @@ bool_t hvm_virtual_to_linear_addr(
  */
 ASSERT(seg < x86_seg_none);
 
-if ( !(curr->arch.hvm.guest_cr[0] & X86_CR0_PE) ||
- (guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) )
+if ( !(curr->arch.hvm.guest_cr[0] & X86_CR0_PE) )
 {
 /*
- * REAL/VM86 MODE: Don't bother with segment access checks.
+ * REAL MODE: Don't bother with segment access checks.
  * Certain of them are not done in native real mode anyway.
  */
 addr = (uint32_t)(addr + reg->base);
-last_byte = (uint32_t)addr + bytes - !!bytes;
-if ( last_byte < addr )
+}
+else if ( (guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) &&
+  is_x86_user_segment(seg) )
+{
+/* VM86 MODE: Fixed 64k limits on all user segments. */
+addr = (uint32_t)(addr + reg->base);
+last_byte = (uint32_t)offset + bytes - !!bytes;
+if ( max(offset, last_byte) >> 16 )
 goto out;
 }
 else if ( hvm_long_mode_active(curr) &&
@@ -2475,8 +2480,7 @@ bool_t hvm_virtual_to_linear_addr(
 addr += reg->base;
 
 last_byte = addr + bytes - !!bytes;
-if ( !is_canonical_address(addr) || last_byte < addr ||
- !is_canonical_address(last_byte) )
+if ( !is_canonical_address((long)addr < 0 ? addr : last_byte) )
 goto out;
 }
 else
@@ -2526,8 +2530,11 @@ bool_t hvm_virtual_to_linear_addr(
 if ( (offset <= reg->limit) || (last_byte < offset) )
 goto out;
 }
-else if ( (last_byte > reg->limit) || (last_byte < offset) )
-goto out; /* last byte is beyond limit or wraps 0x */
+else if ( last_byte > reg->limit )
+goto out; /* last byte is beyond limit */
+else if ( last_byte < offset &&
+  curr->domain->arch.cpuid->x86_vendor == X86_VENDOR_AMD )
+goto out; /* access wraps */
 }
 
 /* All checks ok. */





___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH] pass-through: adjust pIRQ migration

2018-09-10 Thread Jan Beulich
For one it is quite pointless to iterate over all pIRQ-s the domain has
when just one is being adjusted. Introduce hvm_migrate_pirq().

Additionally it is bogus to migrate the pIRQ to a vCPU different from
the one the event is supposed to be posted to - if anything, it might be
worth considering not to migrate the pIRQ at all in the posting case.

Signed-off-by: Jan Beulich 

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -462,10 +462,9 @@ void hvm_migrate_timers(struct vcpu *v)
 pt_migrate(v);
 }
 
-static int hvm_migrate_pirq(struct domain *d, struct hvm_pirq_dpci *pirq_dpci,
-void *arg)
+void hvm_migrate_pirq(struct hvm_pirq_dpci *pirq_dpci, const struct vcpu *v)
 {
-struct vcpu *v = arg;
+ASSERT(iommu_enabled && hvm_domain_irq(v->domain)->dpci);
 
 if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI) &&
  /* Needn't migrate pirq if this pirq is delivered to guest directly.*/
@@ -476,11 +475,17 @@ static int hvm_migrate_pirq(struct domain *d, struct hvm_pirq_dpci *pirq_dpci,
 struct irq_desc *desc = pirq_spin_lock_irq_desc(dpci_pirq(pirq_dpci), NULL);
 
 if ( !desc )
-return 0;
+return;
 ASSERT(MSI_IRQ(desc - irq_desc));
 irq_set_affinity(desc, cpumask_of(v->processor));
 spin_unlock_irq(&desc->lock);
 }
+}
+
+static int migrate_pirq(struct domain *d, struct hvm_pirq_dpci *pirq_dpci,
+void *arg)
+{
+hvm_migrate_pirq(pirq_dpci, arg);
 
 return 0;
 }
@@ -493,7 +498,7 @@ void hvm_migrate_pirqs(struct vcpu *v)
return;
 
 spin_lock(&d->event_lock);
-pt_pirq_iterate(d, hvm_migrate_pirq, v);
+pt_pirq_iterate(d, migrate_pirq, v);
 spin_unlock(&d->event_lock);
 }
 
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -434,8 +434,8 @@ int pt_irq_create_bind(
 if ( vcpu )
 pirq_dpci->gmsi.posted = true;
 }
-if ( dest_vcpu_id >= 0 )
-hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
+if ( vcpu && iommu_enabled )
+hvm_migrate_pirq(pirq_dpci, vcpu);
 
 /* Use interrupt posting if it is supported. */
 if ( iommu_intpost )
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -288,6 +288,7 @@ bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val);
 bool hvm_check_cpuid_faulting(struct vcpu *v);
 void hvm_migrate_timers(struct vcpu *v);
 void hvm_do_resume(struct vcpu *v);
+void hvm_migrate_pirq(struct hvm_pirq_dpci *pirq_dpci, const struct vcpu *v);
 void hvm_migrate_pirqs(struct vcpu *v);
 
 void hvm_inject_event(const struct x86_event *event);





___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v19 12/13] x86/hvm: Remove redundant save functions

2018-09-10 Thread Isaila Alexandru
On Mon, 2018-09-10 at 07:42 -0600, Jan Beulich wrote:
> > > > On 10.09.18 at 15:33,  wrote:
> > 
> > On Mon, 2018-09-10 at 15:36 +0300, Alexandru Isaila wrote:
> > > This patch removes the redundant save functions and renames the
> > > save_one* to save. It then changes the domain param to vcpu in
> > > the
> > > save funcs and adapts print messages in order to match the format
> > > of
> > > the
> > > other save related messages.
> > > 
> > > Signed-off-by: Alexandru Isaila 
> > > 
> > > ---
> > > Changes since V18:
> > >   - Add const struct domain to rtc_save and hpet_save
> > >   - Latched the vCPU into a local variable in hvm_save_one()
> > >   - Add HVMSR_PER_VCPU kind check to the bounds if.
> > > ---
> > >  xen/arch/x86/cpu/mcheck/vmce.c | 18 +---
> > >  xen/arch/x86/emul-i8254.c  |  5 ++-
> > >  xen/arch/x86/hvm/hpet.c|  7 ++--
> > >  xen/arch/x86/hvm/hvm.c | 75 +++-
> > > 
> > > --
> > >  xen/arch/x86/hvm/irq.c | 15 ---
> > >  xen/arch/x86/hvm/mtrr.c| 22 ++
> > >  xen/arch/x86/hvm/pmtimer.c |  5 ++-
> > >  xen/arch/x86/hvm/rtc.c |  5 ++-
> > >  xen/arch/x86/hvm/save.c| 28 +++--
> > >  xen/arch/x86/hvm/vioapic.c |  5 ++-
> > >  xen/arch/x86/hvm/viridian.c| 23 ++-
> > >  xen/arch/x86/hvm/vlapic.c  | 38 ++---
> > >  xen/arch/x86/hvm/vpic.c|  5 ++-
> > >  xen/include/asm-x86/hvm/save.h |  8 +---
> > >  14 files changed, 63 insertions(+), 196 deletions(-)
> > > 
> > > @@ -141,6 +138,8 @@ int hvm_save_one(struct domain *d, unsigned
> > > int
> > > typecode, unsigned int instance,
> > >  int rv;
> > >  hvm_domain_context_t ctxt = { };
> > >  const struct hvm_save_descriptor *desc;
> > > +struct vcpu *v = (hvm_sr_handlers[typecode].kind ==
> > > HVMSR_PER_VCPU) ?
> > > + d->vcpu[instance] : d->vcpu[0];
> > >  
> > 
> > Sorry for the inconvenience but I've just realized that this has to
> > be
> > initialize after the bounds check. I will have this in mine
> 
> Also to eliminate redundancy I'd prefer if you moved the conditional
> expression inside the square brackets.
> 
Are these changes worth waiting 24h?

Alex

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] xen: remove unnecessary condition check before kfree

2018-09-10 Thread zhong jiang
On 2018/9/10 17:52, Juergen Gross wrote:
> On 08/09/18 16:18, zhong jiang wrote:
>> kfree has taken null pointer into account. So just remove the
>> condition check before kfree.
>>
>> Signed-off-by: zhong jiang 
>> ---
>>  drivers/xen/xen-acpi-processor.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/xen/xen-acpi-processor.c 
>> b/drivers/xen/xen-acpi-processor.c
>> index fbb9137..7e1d49e 100644
>> --- a/drivers/xen/xen-acpi-processor.c
>> +++ b/drivers/xen/xen-acpi-processor.c
>> @@ -268,7 +268,7 @@ static int push_pxx_to_hypervisor(struct acpi_processor 
>> *_pr)
>>  pr_warn("(_PXX): Hypervisor error (%d) for ACPI CPU%u\n",
>>  ret, _pr->acpi_id);
>>  err_free:
>> -if (!IS_ERR_OR_NULL(dst_states))
>> +if (!IS_ERR(dst_states))
> This is just a change of the condition, not a removal.
>
> I don't think change is worth it.
>
 Fine, I was just considering the duplication with what the function
already does. Of course, what you have said makes sense.
 Maybe it is clearer to keep the explicit check.

 Thanks,
 zhong jiang
> Juergen
>
>



___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v19 12/13] x86/hvm: Remove redundant save functions

2018-09-10 Thread Jan Beulich
>>> On 10.09.18 at 15:33,  wrote:
> On Mon, 2018-09-10 at 15:36 +0300, Alexandru Isaila wrote:
>> This patch removes the redundant save functions and renames the
>> save_one* to save. It then changes the domain param to vcpu in the
>> save funcs and adapts print messages in order to match the format of
>> the
>> other save related messages.
>> 
>> Signed-off-by: Alexandru Isaila 
>> 
>> ---
>> Changes since V18:
>>  - Add const struct domain to rtc_save and hpet_save
>>  - Latched the vCPU into a local variable in hvm_save_one()
>>  - Add HVMSR_PER_VCPU kind check to the bounds if.
>> ---
>>  xen/arch/x86/cpu/mcheck/vmce.c | 18 +---
>>  xen/arch/x86/emul-i8254.c  |  5 ++-
>>  xen/arch/x86/hvm/hpet.c|  7 ++--
>>  xen/arch/x86/hvm/hvm.c | 75 +++-
>> --
>>  xen/arch/x86/hvm/irq.c | 15 ---
>>  xen/arch/x86/hvm/mtrr.c| 22 ++
>>  xen/arch/x86/hvm/pmtimer.c |  5 ++-
>>  xen/arch/x86/hvm/rtc.c |  5 ++-
>>  xen/arch/x86/hvm/save.c| 28 +++--
>>  xen/arch/x86/hvm/vioapic.c |  5 ++-
>>  xen/arch/x86/hvm/viridian.c| 23 ++-
>>  xen/arch/x86/hvm/vlapic.c  | 38 ++---
>>  xen/arch/x86/hvm/vpic.c|  5 ++-
>>  xen/include/asm-x86/hvm/save.h |  8 +---
>>  14 files changed, 63 insertions(+), 196 deletions(-)
>> 
>> @@ -141,6 +138,8 @@ int hvm_save_one(struct domain *d, unsigned int
>> typecode, unsigned int instance,
>>  int rv;
>>  hvm_domain_context_t ctxt = { };
>>  const struct hvm_save_descriptor *desc;
>> +struct vcpu *v = (hvm_sr_handlers[typecode].kind ==
>> HVMSR_PER_VCPU) ?
>> + d->vcpu[instance] : d->vcpu[0];
>>  
> Sorry for the inconvenience but I've just realized that this has to be
> initialized after the bounds check. I will have this in mind.

Also to eliminate redundancy I'd prefer if you moved the conditional
expression inside the square brackets.
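
For illustration, the suggested form would be something like (sketch):

    struct vcpu *v = d->vcpu[hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU
                             ? instance : 0];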

Jan



___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v19 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-09-10 Thread Jan Beulich
>>> On 10.09.18 at 15:29,  wrote:
> On Mon, 2018-09-10 at 07:25 -0600, Jan Beulich wrote:
>> > > > On 10.09.18 at 14:36,  wrote:
>> > 
>> > --- a/xen/arch/x86/hvm/save.c
>> > +++ b/xen/arch/x86/hvm/save.c
>> > @@ -155,6 +155,11 @@ int hvm_save_one(struct domain *d, unsigned
>> > int typecode, unsigned int instance,
>> >  if ( !ctxt.data )
>> >  return -ENOMEM;
>> >  
>> > +if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
>> > +vcpu_pause(d->vcpu[instance]);
>> 
>> Is there any reason why you don't use v here and ...
> 
> There is no reason, but I did not want to modify the reviewed patch
> further.

But you should (and if in doubt drop the previously supplied tags),
so that the correlation of the above with ...

>> > +else
>> > +domain_pause(d);
>> > +
>> >  if ( (rv = hvm_sr_handlers[typecode].save(v, )) != 0 )

... the actual call becomes as clear as possible. My R-b does not stand
without this adjustment, but it does with it in place. _Provided_ it is
correct, and provided the one remaining patch is now also correct, I'd
be happy to make the change while committing.

Jan



___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v19 12/13] x86/hvm: Remove redundant save functions

2018-09-10 Thread Isaila Alexandru
On Mon, 2018-09-10 at 15:36 +0300, Alexandru Isaila wrote:
> This patch removes the redundant save functions and renames the
> save_one* to save. It then changes the domain param to vcpu in the
> save funcs and adapts print messages in order to match the format of
> the
> other save related messages.
> 
> Signed-off-by: Alexandru Isaila 
> 
> ---
> Changes since V18:
>   - Add const struct domain to rtc_save and hpet_save
>   - Latched the vCPU into a local variable in hvm_save_one()
>   - Add HVMSR_PER_VCPU kind check to the bounds if.
> ---
>  xen/arch/x86/cpu/mcheck/vmce.c | 18 +---
>  xen/arch/x86/emul-i8254.c  |  5 ++-
>  xen/arch/x86/hvm/hpet.c|  7 ++--
>  xen/arch/x86/hvm/hvm.c | 75 +++-
> --
>  xen/arch/x86/hvm/irq.c | 15 ---
>  xen/arch/x86/hvm/mtrr.c| 22 ++
>  xen/arch/x86/hvm/pmtimer.c |  5 ++-
>  xen/arch/x86/hvm/rtc.c |  5 ++-
>  xen/arch/x86/hvm/save.c| 28 +++--
>  xen/arch/x86/hvm/vioapic.c |  5 ++-
>  xen/arch/x86/hvm/viridian.c| 23 ++-
>  xen/arch/x86/hvm/vlapic.c  | 38 ++---
>  xen/arch/x86/hvm/vpic.c|  5 ++-
>  xen/include/asm-x86/hvm/save.h |  8 +---
>  14 files changed, 63 insertions(+), 196 deletions(-)
> 
> @@ -141,6 +138,8 @@ int hvm_save_one(struct domain *d, unsigned int
> typecode, unsigned int instance,
>  int rv;
>  hvm_domain_context_t ctxt = { };
>  const struct hvm_save_descriptor *desc;
> +struct vcpu *v = (hvm_sr_handlers[typecode].kind ==
> HVMSR_PER_VCPU) ?
> + d->vcpu[instance] : d->vcpu[0];
>  
Sorry for the inconvenience but I've just realized that this has to be
initialized after the bounds check. I will have this in mind.

Thanks, 
Alex
> 

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v19 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-09-10 Thread Isaila Alexandru
On Mon, 2018-09-10 at 07:25 -0600, Jan Beulich wrote:
> > > > On 10.09.18 at 14:36,  wrote:
> > 
> > --- a/xen/arch/x86/hvm/save.c
> > +++ b/xen/arch/x86/hvm/save.c
> > @@ -155,6 +155,11 @@ int hvm_save_one(struct domain *d, unsigned
> > int typecode, unsigned int instance,
> >  if ( !ctxt.data )
> >  return -ENOMEM;
> >  
> > +if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> > +vcpu_pause(d->vcpu[instance]);
> 
> Is there any reason why you don't use v here and ...

There is no reason, but I did not want to modify the reviewed patch
further.

Alex
> > +else
> > +domain_pause(d);
> > +
> >  if ( (rv = hvm_sr_handlers[typecode].save(v, )) != 0 )
> >  printk(XENLOG_G_ERR "HVM%d save: failed to save type
> > %"PRIu16" (%d)\n",
> > d->domain_id, typecode, rv);
> > @@ -186,6 +191,11 @@ int hvm_save_one(struct domain *d, unsigned
> > int typecode, unsigned int instance,
> >  }
> >  }
> >  
> > +if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> > +vcpu_unpause(d->vcpu[instance]);
> 
> ... here?
> 

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2] xen: add DEBUG_INFO Kconfig symbol

2018-09-10 Thread Jan Beulich
>>> On 10.09.18 at 15:21,  wrote:
 On 31.08.18 at 10:43,  wrote:
> On 31.08.18 at 10:29,  wrote:
>>> --- a/xen/Kconfig.debug
>>> +++ b/xen/Kconfig.debug
>>> @@ -11,6 +11,13 @@ config DEBUG
>>>  
>>>   You probably want to say 'N' here.
>>>  
>>> +config DEBUG_INFO
>>> +   bool "Compile Xen with debug info"
>>> +   default y
>>> +   ---help---
>>> + If you say Y here the resulting Xen will include debugging info
>>> + resulting in a larger binary image.
>>> +
>>>  if DEBUG || EXPERT = "y"
>> 
>> Perhaps better move your addition into this conditional section?
> 
> So this was a bad suggestion after all - with DEBUG=n DEBUG_INFO is
> now implicitly n as well. The section needs to be moved back to where
> you had it as per above, with the _prompt_ depending on
> DEBUG || EXPERT="y".

Furthermore - is COVERAGE without DEBUG_INFO of any use? Are there
perhaps any other dependencies (I think/hope live patching logic doesn't
depend on debug info)?

Jan



___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v19 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-09-10 Thread Jan Beulich
>>> On 10.09.18 at 14:36,  wrote:
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -155,6 +155,11 @@ int hvm_save_one(struct domain *d, unsigned int 
> typecode, unsigned int instance,
>  if ( !ctxt.data )
>  return -ENOMEM;
>  
> +if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> +vcpu_pause(d->vcpu[instance]);

Is there any reason why you don't use v here and ...

> +else
> +domain_pause(d);
> +
>  if ( (rv = hvm_sr_handlers[typecode].save(v, )) != 0 )
>  printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" 
> (%d)\n",
> d->domain_id, typecode, rv);
> @@ -186,6 +191,11 @@ int hvm_save_one(struct domain *d, unsigned int 
> typecode, unsigned int instance,
>  }
>  }
>  
> +if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> +vcpu_unpause(d->vcpu[instance]);

... here?

Jan



___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [DRBD-user] [PATCH] xen-blkback: Switch to closed state after releasing the backing device

2018-09-10 Thread Valentin Vidic
On Mon, Sep 10, 2018 at 02:45:31PM +0200, Lars Ellenberg wrote:
> On Sat, Sep 08, 2018 at 09:34:32AM +0200, Valentin Vidic wrote:
> > On Fri, Sep 07, 2018 at 07:14:59PM +0200, Valentin Vidic wrote:
> > > In fact the first one is the original code path before I modified
> > > blkback.  The problem is it gets executed async from workqueue so
> > > it might not always run before the call to drbdadm secondary.
> > 
> > As the DRBD device gets released only when the last IO request
> > has finished, I found a way to check and wait for this in the
> > block-drbd script:
> 
> > --- block-drbd.orig 2018-09-08 09:07:23.499648515 +0200
> > +++ block-drbd  2018-09-08 09:28:12.892193649 +0200
> > @@ -230,6 +230,24 @@
> >  and so cannot be mounted ${m2}${when}."
> >  }
> >  
> > +wait_for_inflight()
> > +{
> > +  local dev="$1"
> > +  local inflight="/sys/block/${dev#/dev/}/inflight"
> > +  local rd wr
> > +
> > +  if ! [ -f "$inflight" ]; then
> > +return
> > +  fi
> > +
> > +  while true; do
> > +read rd wr < $inflight
> > +if [ "$rd" = "0" -a "$wr" = "0" ]; then
> 
> If it is "idle" now, but still "open",
> this will not sleep, and still fail the demotion below.

True, but in this case blkback is holding it open until all
the writes have finished and the last write closes the device.
Since fuser can't check blkback, this is an approximation that
seems to work because I don't get any failed drbdadm calls now.

> You try to help it by "waiting forever until it appears to be idle".
> I suggest to at least limit the retries by iteration or time.
> And also (or, instead; but you'd potentially get a number of
> "scary messages" in the logs) add something like:

Ok, should I open a PR to discuss this change further?
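
For illustration, a bounded variant of the wait loop might look like this
(a sketch only; the log helper is assumed from the hotplug script
environment):

  local tries=0
  while [ $tries -lt 60 ]; do
    read rd wr < $inflight
    if [ "$rd" = "0" -a "$wr" = "0" ]; then
      return
    fi
    tries=$(($tries + 1))
    sleep 1
  done
  log err "$dev still has in-flight I/O after $tries tries"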

> Or, well, yes, fix blkback to not "defer" the final close "too long",
> if at all possible.

blkback needs to finish the writes on shutdown or I get fsck errors
on next boot. Ideally XenbusStateClosed should be delayed until the
device release but currently it does not seem possible without breaking
other things.

-- 
Valentin

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 0/1] cameraif: Add ABI for para-virtualized

2018-09-10 Thread Laurent Pinchart
Hi Oleksandr,

Thank you for the patch.

On Tuesday, 31 July 2018 12:31:41 EEST Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko 
> 
> Hello!
> 
> At the moment Xen [1] already supports some virtual multimedia
> features [2] such as virtual display, sound. It supports keyboards,
> pointers and multi-touch devices all allowing Xen to be used in
> automotive appliances, In-Vehicle Infotainment (IVI) systems
> and many more.
> 
> This work adds a new Xen para-virtualized protocol for a virtual
> camera device which extends multimedia capabilities of Xen even
> farther: video conferencing, IVI, high definition maps etc.
> 
> The initial goal is to support most needed functionality with the
> final idea to make it possible to extend the protocol if need be:
> 
> 1. Provide means for base virtual device configuration:
>  - pixel formats
>  - resolutions
>  - frame rates
> 2. Support basic camera controls:
>  - contrast
>  - brightness
>  - hue
>  - saturation
> 3. Support streaming control
> 4. Support zero-copying use-cases
> 
> I hope that Xen and V4L and other communities could give their
> valuable feedback on this work, so I can update the protocol
> to better fit any additional requirements I might have missed.

I'll start with a question: what are the expected use cases? The ones listed 
above sound like they would better be solved by passing the corresponding 
device(s) to the guest.

> [1] https://www.xenproject.org/
> [2] https://xenbits.xen.org/gitweb/?p=xen.git;a=tree;f=xen/include/public/io
> 
> Oleksandr Andrushchenko (1):
>   cameraif: add ABI for para-virtual camera
> 
>  xen/include/public/io/cameraif.h | 981 +++
>  1 file changed, 981 insertions(+)
>  create mode 100644 xen/include/public/io/cameraif.h

-- 
Regards,

Laurent Pinchart





Re: [Xen-devel] [PATCH v2] xen: add DEBUG_INFO Kconfig symbol

2018-09-10 Thread Jan Beulich
>>> On 31.08.18 at 10:43,  wrote:
>>>> On 31.08.18 at 10:29,  wrote:
>> --- a/xen/Kconfig.debug
>> +++ b/xen/Kconfig.debug
>> @@ -11,6 +11,13 @@ config DEBUG
>>  
>>You probably want to say 'N' here.
>>  
>> +config DEBUG_INFO
>> +bool "Compile Xen with debug info"
>> +default y
>> +---help---
>> +  If you say Y here the resulting Xen will include debugging info
>> +  resulting in a larger binary image.
>> +
>>  if DEBUG || EXPERT = "y"
> 
> Perhaps better move your addition into this conditional section?

So this was a bad suggestion after all - with DEBUG=n DEBUG_INFO is
now implicitly n as well. The section needs to be moved back to where
you had it as per above, with the _prompt_ depending on
DEBUG || EXPERT="y".
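
A minimal sketch of that arrangement (illustrative only; the help text
stays as in your patch):

config DEBUG_INFO
	bool "Compile Xen with debug info" if DEBUG || EXPERT = "y"
	default y
	---help---
	  If you say Y here the resulting Xen will include debugging info
	  resulting in a larger binary image.

This way DEBUG_INFO remains default-on for all builds, while the prompt
is only visible in DEBUG or EXPERT configurations.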

Jan




[Xen-devel] [PATCH v6 12/16] x86/xen: Add Hygon Dhyana support to Xen

2018-09-10 Thread Pu Wen
To make Xen work functionally on Hygon platforms, reuse AMD's Xen
support code path for the Hygon Dhyana CPU.

There are six core performance event counters per thread, so there are
six MSRs for these counters (0-5). There are also four legacy PMC MSRs,
which are aliases of counters (0-3).

In this version of the kernel, Hygon uses the legacy and safe versions
of MSR access. This works fine when VPMU is enabled in Xen on Hygon
platforms, as verified by testing with perf.

Reviewed-by: Boris Ostrovsky 
Signed-off-by: Pu Wen 
---
 arch/x86/xen/pmu.c | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
index 7d00d4a..9403854 100644
--- a/arch/x86/xen/pmu.c
+++ b/arch/x86/xen/pmu.c
@@ -90,6 +90,12 @@ static void xen_pmu_arch_init(void)
k7_counters_mirrored = 0;
break;
}
+   } else if (boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
+   amd_num_counters = F10H_NUM_COUNTERS;
+   amd_counters_base = MSR_K7_PERFCTR0;
+   amd_ctrls_base = MSR_K7_EVNTSEL0;
+   amd_msr_step = 1;
+   k7_counters_mirrored = 0;
} else {
uint32_t eax, ebx, ecx, edx;
 
@@ -285,7 +291,7 @@ static bool xen_amd_pmu_emulate(unsigned int msr, u64 *val, 
bool is_read)
 
 bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err)
 {
-   if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
+   if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
if (is_amd_pmu_msr(msr)) {
if (!xen_amd_pmu_emulate(msr, val, 1))
*val = native_read_msr_safe(msr, err);
@@ -308,7 +314,7 @@ bool pmu_msr_write(unsigned int msr, uint32_t low, uint32_t 
high, int *err)
 {
uint64_t val = ((uint64_t)high << 32) | low;
 
-   if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
+   if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
if (is_amd_pmu_msr(msr)) {
if (!xen_amd_pmu_emulate(msr, &val, 0))
*err = native_write_msr_safe(msr, low, high);
@@ -379,7 +385,7 @@ static unsigned long long xen_intel_read_pmc(int counter)
 
 unsigned long long xen_read_pmc(int counter)
 {
-   if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+   if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
return xen_amd_read_pmc(counter);
else
return xen_intel_read_pmc(counter);
-- 
2.7.4



Re: [Xen-devel] [PATCH 1/1] cameraif: add ABI for para-virtual camera

2018-09-10 Thread Oleksandr Andrushchenko

On 09/10/2018 03:26 PM, Hans Verkuil wrote:

On 09/10/2018 01:49 PM, Oleksandr Andrushchenko wrote:

On 09/10/2018 02:09 PM, Hans Verkuil wrote:

On 09/10/2018 11:52 AM, Oleksandr Andrushchenko wrote:

On 09/10/2018 12:04 PM, Hans Verkuil wrote:

On 09/10/2018 10:24 AM, Oleksandr Andrushchenko wrote:

On 09/10/2018 10:53 AM, Hans Verkuil wrote:

Hi Oleksandr,

On 09/10/2018 09:16 AM, Oleksandr Andrushchenko wrote:




I suspect that you likely will want to support such sources eventually, so
it pays to design this with that in mind.

Again, I think it is the backend's job to hide these
use-cases from the frontend.

I'm not sure you can: say you are playing a Blu-ray connected to the system
over HDMI; if there is a resolution change, what do you do? You can tear
everything down and build it up again, or you can just tell the frontends that
something changed and that they have to look at the new vcamera configuration.

The latter seems to be more sensible to me. It is really not much that you
need to do: all you really need is an event signalling that something changed.
In V4L2 that's the V4L2_EVENT_SOURCE_CHANGE.

well, this complicates things a lot as I'll have to
re-allocate buffers - right?

Right. Different resolutions means different sized buffers and usually lots of
changes throughout the whole video pipeline, which in this case can even
go into multiple VMs.

One additional thing to keep in mind for the future: V4L2_EVENT_SOURCE_CHANGE
has a flags field that tells userspace what changed. Right now that is just the
resolution, but in the future you can expect flags for cases where just the
colorspace information changes, but not the resolution.
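
For illustration, a V4L2-based backend would typically catch this with
the standard event API, along these lines (a sketch; error handling and
the poll() loop are elided):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void watch_source_changes(int fd)
{
    struct v4l2_event_subscription sub;
    struct v4l2_event ev;

    memset(&sub, 0, sizeof(sub));
    sub.type = V4L2_EVENT_SOURCE_CHANGE;
    ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub);

    /* Later, when poll() reports POLLPRI on fd: */
    if (ioctl(fd, VIDIOC_DQEVENT, &ev) == 0 &&
        ev.type == V4L2_EVENT_SOURCE_CHANGE &&
        (ev.u.src_change.changes & V4L2_EVENT_SRC_CH_RESOLUTION))
    {
        /* The source format changed: re-query it and signal the
         * frontends to re-read the vcamera configuration. */
    }
}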

Which reminds me of two important missing pieces of information in your 
protocol:

1) You need to communicate the colorspace data:

- colorspace
- xfer_func
- ycbcr_enc/hsv_enc (unlikely you ever want to support HSV pixelformats, so I
 think you can ignore hsv_enc)
- quantization

(See
https://hverkuil.home.xs4all.nl/spec/uapi/v4l/pixfmt-v4l2.html#c.v4l2_pix_format
and the links to the colorspace sections in the V4L2 spec for details.)

This information is part of the format, it is reported by the driver.

I'll take a look and think about what can be put into the protocol and how.
Do you think I'll have to implement all of the above at
this stage?

Yes. Without it VMs will have no way of knowing how to reproduce the right 
colors.
They don't *have* to use this information, but it should be there. For cameras
this isn't all that important, for SDTV/HDTV sources this becomes more relevant
(esp. the quantization and ycbcr_enc information) and for sources with 
BT.2020/HDR
formats this is critical.

ok, then I'll add the following to the set_config request/response:

  uint32_t colorspace;
  uint32_t xfer_func;
  uint32_t ycbcr_enc;
  uint32_t quantization;

In this respect, I will need to put some OS-agnostic constants
into the protocol, so that even if backend and frontend are not Linux/V4L2
based they can still talk to each other.
I see that V4L2 already defines constants for the above: [1], [2], [3], [4].

Do you think I can define the same replacing V4L2_ prefix
with XENCAMERA_, e.g. V4L2_XFER_FUNC_SRGB -> XENCAMERA_XFER_FUNC_SRGB?

Yes.


Do I need to define all of those, or can there be some subset of the
above for my simpler use-case?

Most of these defines directly map to standards. I would skip the following
defines:

V4L2_COLORSPACE_DEFAULT (not applicable)
V4L2_COLORSPACE_470_SYSTEM_*  (rarely used, if received by the HW the Xen 
backend
should map this to V4L2_COLORSPACE_SMPTE170M)
V4L2_COLORSPACE_JPEG (historical V4L2 artifact, see here how to map:
 
https://hverkuil.home.xs4all.nl/spec/uapi/v4l/colorspaces-details.html#col-jpeg)

V4L2_COLORSPACE_SMPTE240M (rarely used, map to V4L2_COLORSPACE_SMPTE170M if 
seen in backend)

V4L2_XFER_FUNC_SMPTE240M (rarely used, map to V4L2_XFER_FUNC_709)

V4L2_YCBCR_ENC_SMPTE240M (rarely used, map to V4L2_YCBCR_ENC_709)

While V4L2 allows 0 (DEFAULT) values for xfer_func, ycbcr_enc and quantization, 
and
provides macros to map default values to the actual values (for legacy reasons),
the Xen backend should always fill this in explicitly, using those same mapping
macros (see e.g. V4L2_MAP_XFER_FUNC_DEFAULT).
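
For example, resolving the defaults in the backend could look roughly
like this (is_rgb is an assumed local flag derived from the pixel
format):

    if (!xfer_func)
        xfer_func = V4L2_MAP_XFER_FUNC_DEFAULT(colorspace);
    if (!ycbcr_enc)
        ycbcr_enc = V4L2_MAP_YCBCR_ENC_DEFAULT(colorspace);
    if (!quantization)
        quantization = V4L2_MAP_QUANTIZATION_DEFAULT(is_rgb, colorspace,
                                                     ycbcr_enc);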

The V4L2 spec has extensive information on colorspaces (sections 2.14-2.17).


Thank you for such a detailed explanation!
I'll define the constants as agreed above.
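
As a rough sketch, the v2 header could then mirror the kept V4L2
constants like this (the numeric values are illustrative; only the
V4L2_ -> XENCAMERA_ renaming is agreed above):

#define XENCAMERA_COLORSPACE_SMPTE170M  0
#define XENCAMERA_COLORSPACE_REC709     1
#define XENCAMERA_COLORSPACE_SRGB       2
/* ... one constant per kept V4L2 colorspace ... */

#define XENCAMERA_XFER_FUNC_709         0
#define XENCAMERA_XFER_FUNC_SRGB        1
/* ... and likewise for ycbcr_enc and quantization. */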


The vivid driver can actually reproduce all combinations, so that's a good 
driver
to test this with.

You mean I can use it on backend side instead of real HW camera and
test all the configurations possible/those of interest?

Right.

Regards,

Hans

It seems that the number of changes discussed is begging
for v2 of the protocol to be published ;)

Thank you,
Oleksandr


[Xen-devel] [PATCH v6 00/16] Add support for Hygon Dhyana Family 18h processor

2018-09-10 Thread Pu Wen
Chengdu Haiguang IC Design Co., Ltd (Hygon) is a new x86 CPU vendor,
a joint venture between AMD and Haiguang Information Technology Co.,
Ltd., and aims at providing high-performance x86 processors for the
China server market.

The first-generation Hygon processor (Dhyana) originates from AMD
technology and shares most of its architecture with AMD's family 17h,
but has a different CPU vendor ID ("HygonGenuine"), PCIe device vendor
ID (0x1D94) and family series number (family 18h).

To enable Linux kernel support for Hygon's CPU, we added a new
vendor type (X86_VENDOR_HYGON, with a value of 9) in
arch/x86/include/asm/processor.h, and shared most of the kernel
support code with AMD family 17h.

As Hygon will negotiate with AMD to make sure that only Hygon uses
family 18h, we try to minimize code modification and share most
code with AMD under this consideration.
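
(The shared checks in the individual patches then typically take a
form like the following -- illustrative only, and the exact shape
varies per subsystem; init_amd_like_features() stands in for whatever
AMD code path is being reused:

	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
		init_amd_like_features();
)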

This patch series has been applied and tested successfully on Hygon
Dhyana SoC silicon. It was also tested on an AMD EPYC (family 17h)
processor, where it works fine and does no harm to the existing code.


v5->v6:
  - Rebased on 4.19-rc3 and tested against it.
  - Add Reviewed-by from Borislav Petkov for cacheinfo, smpboot,
alternative and kvm.
  - Rework the patch subjects and patch descriptions.
  - Rework vendor checking for some patches to minimize the code
modification.

v4->v5:
  - Rebased on 4.19-rc1 and tested against it.
  - Add Reviewed-by from Boris Ostrovsky for Xen.
  - Rework EDAC patch without vendor checking for minimal modification.

v3->v4:
  - Rebased on 4.18.3 and tested against it.
  - Merge patches 05/17 perfctr and 10/17 events from v3 into patch
05/16 PMU for better patch grouping.
  - Add hygon_get_topology_early() in patch 01/16.
  - Rework vendor checking and refine coding style.
  - Add Acked-by from Bjorn Helgaas for pci.
  - Add Acked-by from Rafael J. Wysocki for cpufreq and acpi.

v2->v3:
  - Rebased on 4.18-rc8 and tested against it.
  - Rework vendor checking codes to improve consistency.

v1->v2:
  - Rebased on 4.18-rc6 and tested against it.
  - Split the patchset to small series of patches.
  - Rework patch descriptions.
  - Create a separated arch/x86/kernel/cpu/hygon.c for Dhyana CPU
initialization to reduce long-term maintenance effort.


Pu Wen (16):
  x86/cpu: Create Hygon Dhyana architecture support file
  x86/cpu: Get cache info and setup cache cpumap for Hygon Dhyana
  x86/cpu/mtrr: Support TOP_MEM2 and get MTRR number
  x86/smpboot: SMP init nodelay and not flush caches before sleep
  perf/x86: Add Hygon Dhyana support to PMU infrastructure
  x86/alternative: Init ideal_nops for Hygon Dhyana
  x86/pci: Add Hygon Dhyana support to PCI and north bridge
  x86/apic: Add Hygon Dhyana support to APIC
  x86/bugs: Add mitigation to spectre and no meltdown for Hygon Dhyana
  x86/mce: Add Hygon Dhyana support to MCE infrastructure
  x86/kvm: Add Hygon Dhyana support to KVM infrastructure
  x86/xen: Add Hygon Dhyana support to Xen
  ACPI, x86: Add Hygon Dhyana support
  cpufreq, x86: Add Hygon Dhyana support
  EDAC, amd64: Add Hygon Dhyana support
  cpupower, x86: Add Hygon Dhyana support

 MAINTAINERS|   6 +
 arch/x86/Kconfig.cpu   |  14 +
 arch/x86/events/amd/core.c |   4 +
 arch/x86/events/amd/uncore.c   |  20 +-
 arch/x86/events/core.c |   4 +
 arch/x86/include/asm/cacheinfo.h   |   1 +
 arch/x86/include/asm/kvm_emulate.h |   4 +
 arch/x86/include/asm/mce.h |   2 +
 arch/x86/include/asm/processor.h   |   3 +-
 arch/x86/include/asm/virtext.h |   5 +-
 arch/x86/kernel/alternative.c  |   4 +
 arch/x86/kernel/amd_nb.c   |  47 ++-
 arch/x86/kernel/apic/apic.c|  13 +-
 arch/x86/kernel/apic/probe_32.c|   1 +
 arch/x86/kernel/cpu/Makefile   |   1 +
 arch/x86/kernel/cpu/bugs.c |   6 +-
 arch/x86/kernel/cpu/cacheinfo.c|  31 +-
 arch/x86/kernel/cpu/common.c   |   1 +
 arch/x86/kernel/cpu/cpu.h  |   1 +
 arch/x86/kernel/cpu/hygon.c| 411 +
 arch/x86/kernel/cpu/mcheck/mce-severity.c  |   3 +-
 arch/x86/kernel/cpu/mcheck/mce.c   |  20 +-
 arch/x86/kernel/cpu/mtrr/cleanup.c |   3 +-
 arch/x86/kernel/cpu/mtrr/mtrr.c|   2 +-
 arch/x86/kernel/cpu/perfctr-watchdog.c |   2 +
 arch/x86/kernel/smpboot.c  |   4 +-
 arch/x86/kvm/emulate.c |  11 +-
 arch/x86/pci/amd_bus.c |   6 +-
 arch/x86/xen/pmu.c |  12 +-
 drivers/acpi/acpi_pad.c|   1 +
 

[Xen-devel] [libvirt test] 127467: regressions - FAIL

2018-09-10 Thread osstest service owner
flight 127467 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/127467/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build    fail REGR. vs. 123814
 build-amd64-libvirt           6 libvirt-build    fail REGR. vs. 123814
 build-arm64-libvirt           6 libvirt-build    fail REGR. vs. 123814
 build-armhf-libvirt           6 libvirt-build    fail REGR. vs. 123814

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a

version targeted for testing:
 libvirt  d5a5cbb532f9d5c8a1ee2d127158c11a15dec971
baseline version:
 libvirt  076a2b409667dd9f716a2a2085e1ffea9d58fe8b

Last test of basis   123814  2018-06-05 04:19:23 Z   97 days
 Failing since        123840  2018-06-06 04:19:28 Z   96 days   78 attempts
Testing same since   127401  2018-09-08 04:22:31 Z2 days3 attempts


People who touched revisions under test:
Ales Musil 
  Andrea Bolognani 
  Anya Harter 
  Bing Niu 
  Bjoern Walk 
  Bobo Du 
  Boris Fiuczynski 
  Brijesh Singh 
  Changkuo Shi 
  Chen Hanxiao 
  Christian Ehrhardt 
  Clementine Hayat 
  Cole Robinson 
  Dan Kenigsberg 
  Daniel Nicoletti 
  Daniel P. Berrangé 
  Daniel Veillard 
  Eric Blake 
  Erik Skultety 
  Fabiano Fidêncio 
  Farhan Ali 
  Filip Alac 
  Han Han 
  intrigeri 
  intrigeri 
  Jamie Strandboge 
  Jie Wang 
  Jim Fehlig 
  Jiri Denemark 
  John Ferlan 
  Julio Faracco 
  Ján Tomko 
  Kashyap Chamarthy 
  Katerina Koukiou 
  Laine Stump 
  Laszlo Ersek 
  Lubomir Rintel 
  Luyao Huang 
  Marc Hartmayer 
  Marc Hartmayer 
  Marcos Paulo de Souza 
  Marek Marczykowski-Górecki 
  Martin Kletzander 
  Matthias Bolte 
  Michal Privoznik 
  Michal Prívozník 
  Nikolay Shirokovskiy 
  Pavel Hrdina 
  Peter Krempa 
  Pino Toscano 
  Radostin Stoyanov 
  Ramy Elkest 
  ramyelkest 
  Richard W.M. Jones 
  Roman Bogorodskiy 
  Roman Bolshakov 
  Shi Lei 
  Shi Lei 
  Shichangkuo 
  Shivaprasad G Bhat 
  Simon Kobyda 
  Stefan Bader 
  Stefan Berger 
  Sukrit Bhatnagar 
  Tomáš Golembiovský 
  Vitaly Kuznetsov 
  w00251574 
  Wang Huaqiang 
  Weilun Zhu 
  xinhua.Cao 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-arm64-libvirt  fail
 build-armhf-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmblocked 
 test-amd64-amd64-libvirt-xsm blocked 
 test-arm64-arm64-libvirt-xsm blocked 
 test-amd64-i386-libvirt-xsm  blocked 
 test-amd64-amd64-libvirt blocked 
 test-arm64-arm64-libvirt blocked 
 test-armhf-armhf-libvirt blocked 
 test-amd64-i386-libvirt   

Re: [Xen-devel] [PATCH 0/1] cameraif: Add ABI for para-virtualized

2018-09-10 Thread Oleksandr Andrushchenko

Hi, Laurent!

On 09/10/2018 03:48 PM, Laurent Pinchart wrote:

Hi Oleksandr,

Thank you for the patch.

On Tuesday, 31 July 2018 12:31:41 EEST Oleksandr Andrushchenko wrote:

From: Oleksandr Andrushchenko 

Hello!

At the moment Xen [1] already supports some virtual multimedia
features [2] such as virtual display and sound. It supports keyboards,
pointers and multi-touch devices, all allowing Xen to be used in
automotive appliances, In-Vehicle Infotainment (IVI) systems
and many more.

This work adds a new Xen para-virtualized protocol for a virtual
camera device which extends the multimedia capabilities of Xen even
further: video conferencing, IVI, high-definition maps, etc.

The initial goal is to support the most needed functionality, with the
idea of making it possible to extend the protocol later if need be:

1. Provide means for base virtual device configuration:
  - pixel formats
  - resolutions
  - frame rates
2. Support basic camera controls:
  - contrast
  - brightness
  - hue
  - saturation
3. Support streaming control
4. Support zero-copying use-cases

I hope that the Xen, V4L and other communities can give their
valuable feedback on this work, so I can update the protocol
to better fit any additional requirements I might have missed.

I'll start with a question: what are the expected use cases?

The most basic use-case is to share a capture stream produced
by a single HW camera with multiple VMs for different
purposes: In-Vehicle Infotainment, high-definition maps, etc.,
all running in different (dedicated) VMs at the same time.

The ones listed
above sound like they would be better solved by passing the corresponding
device(s) to the guest.

With the above use-case I cannot tell how passing the
corresponding *single* device can serve *multiple* VMs.
Could you please elaborate more on the solution you see?



[1] https://www.xenproject.org/
[2] https://xenbits.xen.org/gitweb/?p=xen.git;a=tree;f=xen/include/public/io

Oleksandr Andrushchenko (1):
   cameraif: add ABI for para-virtual camera

  xen/include/public/io/cameraif.h | 981 +++
  1 file changed, 981 insertions(+)
  create mode 100644 xen/include/public/io/cameraif.h

Thank you,
Oleksandr


Re: [Xen-devel] [PATCH v2 08/13] optee: add support for RPC SHM buffers

2018-09-10 Thread Julien Grall

Hi Volodymyr,

On 03/09/18 17:54, Volodymyr Babchuk wrote:

OP-TEE usually uses the same idea with command buffers (see the
previous commit) to issue RPC requests. The problem is that initially
it has no buffer where it can write a request. So the first RPC
request it makes is special: it asks the NW to allocate a shared
buffer for the other RPC requests. Usually this buffer is allocated
only once for every OP-TEE thread and it remains allocated all
the time until shutdown.

The mediator needs to pin this buffer (or buffers) to make sure that
the domain can't transfer it to someone else. It should also be
mapped into the Xen address space, because the mediator needs to
check responses from guests.


Can you explain why you always need to keep the shared buffer mapped in
Xen? Why not use access_guest_memory_by_ipa() every time you want to get
information from the guest?
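
(For illustration, the on-demand alternative would be a copy along
these lines -- a sketch with error handling elided; the helper is the
existing Xen/Arm access_guest_memory_by_ipa():

    struct optee_msg_arg arg;

    /* Copy the header from the guest only when it is needed,
     * instead of keeping a persistent mapping around. */
    if ( access_guest_memory_by_ipa(ctx->domain, gaddr, &arg,
                                    sizeof(arg), false) )
        return -EFAULT;
)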




Signed-off-by: Volodymyr Babchuk 
---
  xen/arch/arm/tee/optee.c | 121 ++-
  1 file changed, 119 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 1008eba..6d6b51d 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -21,6 +21,7 @@
  #include 
  
  #define MAX_STD_CALLS   16

+#define MAX_RPC_SHMS    16
  
  /*

   * Call context. OP-TEE can issue multiple RPC returns during one call.
@@ -35,11 +36,22 @@ struct std_call_ctx {
  int rpc_op;
  };
  
+/* Pre-allocated SHM buffer for RPC commands */

+struct shm_rpc {
+struct list_head list;
+struct optee_msg_arg *guest_arg;
+struct page *guest_page;
+mfn_t guest_mfn;
+uint64_t cookie;
+};
+
  struct domain_ctx {
  struct list_head list;
  struct list_head call_ctx_list;
+struct list_head shm_rpc_list;
  struct domain *domain;
  atomic_t call_ctx_count;
+atomic_t shm_rpc_count;
  spinlock_t lock;
  };
  
@@ -145,8 +157,10 @@ static int optee_enable(struct domain *d)
  
  ctx->domain = d;

  INIT_LIST_HEAD(&ctx->call_ctx_list);
+INIT_LIST_HEAD(&ctx->shm_rpc_list);
  
  atomic_set(&ctx->call_ctx_count, 0);

+atomic_set(&ctx->shm_rpc_count, 0);
  spin_lock_init(&ctx->lock);
  
  spin_lock(&domain_ctx_list_lock);

@@ -256,11 +270,81 @@ static struct std_call_ctx *find_call_ctx(struct 
domain_ctx *ctx, int thread_id)
  return NULL;
  }
  
+static struct shm_rpc *allocate_and_map_shm_rpc(struct domain_ctx *ctx, paddr_t gaddr,


I would prefer if you pass a gfn instead of the address here.


+uint64_t cookie)


NIT: Indentation


+{
+struct shm_rpc *shm_rpc;
+int count;
+
+count = atomic_add_unless(&ctx->shm_rpc_count, 1, MAX_RPC_SHMS);
+if ( count == MAX_RPC_SHMS )
+return NULL;
+
+shm_rpc = xzalloc(struct shm_rpc);
+if ( !shm_rpc )
+goto err;
+
+shm_rpc->guest_mfn = lookup_and_pin_guest_ram_addr(gaddr, NULL);
+
+if ( mfn_eq(shm_rpc->guest_mfn, INVALID_MFN) )
+goto err;
+
+shm_rpc->guest_arg = map_domain_page_global(shm_rpc->guest_mfn);
+if ( !shm_rpc->guest_arg )
+{
+gprintk(XENLOG_INFO, "Could not map domain page\n");


You don't unpin the guest page if Xen can't map the page.


+goto err;
+}
+shm_rpc->cookie = cookie;
+
+spin_lock(&ctx->lock);
+list_add_tail(&shm_rpc->list, &ctx->shm_rpc_list);
+spin_unlock(&ctx->lock);
+
+return shm_rpc;
+
+err:
+atomic_dec(>shm_rpc_count);
+xfree(shm_rpc);
+return NULL;
+}
+
+static void free_shm_rpc(struct domain_ctx *ctx, uint64_t cookie)
+{
+struct shm_rpc *shm_rpc;
+bool found = false;
+
+spin_lock(&ctx->lock);
+
+list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
+{
+if ( shm_rpc->cookie == cookie )


What guarantees that the cookie will be unique?


+{
+found = true;
+list_del(&shm_rpc->list);
+break;
+}
+}
+spin_unlock(&ctx->lock);


At this point you have a shm_rpc in hand to free. But what
guarantees that no-one will use it?



+
+if ( !found ) {
+return;
+}


No need for the {} in a one-liner.


+
+if ( shm_rpc->guest_arg ) {


Coding style:

if ( ... )
{


+unpin_guest_ram_addr(shm_rpc->guest_mfn);
+unmap_domain_page_global(shm_rpc->guest_arg);
+}
+
+xfree(shm_rpc);
+}
+
  static void optee_domain_destroy(struct domain *d)
  {
  struct arm_smccc_res resp;
  struct domain_ctx *ctx;
  struct std_call_ctx *call, *call_tmp;
+struct shm_rpc *shm_rpc, *shm_rpc_tmp;
  bool found = false;
  
  /* At this time all domain VCPUs should be stopped */

@@ -290,7 +374,11 @@ static void optee_domain_destroy(struct domain *d)
  list_for_each_entry_safe( call, call_tmp, &ctx->call_ctx_list, list )
  free_std_call_ctx(ctx, call);
  
+list_for_each_entry_safe( shm_rpc, shm_rpc_tmp, &ctx->shm_rpc_list, list )

+free_shm_rpc(ctx, shm_rpc->cookie);
+
  ASSERT(!atomic_read(&ctx->call_ctx_count));
+ASSERT(!atomic_read(&ctx->shm_rpc_count));
  
  

[Xen-devel] v4.19-rc3, wrong pageflags in dom0

2018-09-10 Thread Olaf Hering
After reboot I tried to start my HVM domU; this is what I get in dom0:



Welcome to SUSE Linux Enterprise Server 12 SP2  (x86_64) - Kernel 
4.19.321-default-bug1106594 (hvc0).


stein-schneider login: (XEN) HVM1 save: CPU
(XEN) HVM1 save: PIC
(XEN) HVM1 save: IOAPIC
(XEN) HVM1 save: LAPIC
(XEN) HVM1 save: LAPIC_REGS
(XEN) HVM1 save: PCI_IRQ
(XEN) HVM1 save: ISA_IRQ
(XEN) HVM1 save: PCI_LINK
(XEN) HVM1 save: PIT
(XEN) HVM1 save: RTC
(XEN) HVM1 save: HPET
(XEN) HVM1 save: PMTIMER
(XEN) HVM1 save: MTRR
(XEN) HVM1 save: VIRIDIAN_DOMAIN
(XEN) HVM1 save: CPU_XSAVE
(XEN) HVM1 save: VIRIDIAN_VCPU
(XEN) HVM1 save: VMCE_VCPU
(XEN) HVM1 save: TSC_ADJUST
(XEN) HVM1 restore: CPU 0
(d1) HVM Loader
(d1) Detected Xen v4.7.6_04-43.39
(d1) Xenbus rings @0xfeffc000, event channel 1
(d1) System requested SeaBIOS
(d1) CPU speed is 2667 MHz
(d1) Relocating guest memory for lowmem MMIO space disabled
(d1) PCI-ISA link 0 routed to IRQ5
(d1) PCI-ISA link 1 routed to IRQ10
(d1) PCI-ISA link 2 routed to IRQ11
(d1) PCI-ISA link 3 routed to IRQ5
(d1) pci dev 01:3 INTA->IRQ10
(d1) pci dev 02:0 INTA->IRQ11
(d1) pci dev 04:0 INTA->IRQ5
(d1) No RAM in high memory; setting high_mem resource base to 1
(d1) pci dev 03:0 bar 10 size 00200: 0f008
(d1) pci dev 02:0 bar 14 size 00100: 0f208
(d1) pci dev 04:0 bar 30 size 4: 0f300
(d1) pci dev 03:0 bar 30 size 1: 0f304
(d1) pci dev 03:0 bar 14 size 01000: 0f305
(d1) pci dev 02:0 bar 10 size 00100: 0c001
(d1) pci dev 04:0 bar 10 size 00100: 0c101
(d1) pci dev 04:0 bar 14 size 00100: 0f3051000
(d1) pci dev 01:1 bar 20 size 00010: 0c201
(d1) Multiprocessor initialisation:
(d1)  - CPU0 ... 40-bit phys ... fixed MTRRs ... var MTRRs [1/8] ... done.
(d1)  - CPU1 ... 40-bit phys ... fixed MTRRs ... var MTRRs [1/8] ... done.
(d1)  - CPU2 ... 40-bit phys ... fixed MTRRs ... var MTRRs [1/8] ... done.
(d1)  - CPU3 ... 40-bit phys ... fixed MTRRs ... var MTRRs [1/8] ... done.
(d1) Writing SMBIOS tables ...
(d1) Loading SeaBIOS ...
(d1) Creating MP tables ...
(d1) Loading ACPI ...
(d1) vm86 TSS at fc00a200
(d1) BIOS map:
(d1)  1-100e3: Scratch space
(d1)  c-f: Main BIOS
(d1) E820 table:
(d1)  [00]: : - :000a: RAM
(d1)  HOLE: :000a - :000c
(d1)  [01]: :000c - :0010: RESERVED
(d1)  [02]: :0010 - :3700: RAM
(d1)  HOLE: :3700 - :fc00
(d1)  [03]: :fc00 - 0001:: RESERVED
(d1) Invoking SeaBIOS ...
(d1) SeaBIOS (version rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org)
(d1) BUILD: gcc: (SUSE Linux) 4.8.5 binutils: (GNU Binutils; SUSE Linux 
Enterprise 1
(d1) 2) 2.29.1
(d1)
(d1) Found Xen hypervisor signature at 4000
(d1) Running on QEMU (i440fx)
(d1) xen: copy e820...
(d1) Relocating init from 0x000dc280 to 0x36fad700 (size 75888)
(d1) Found 7 PCI devices (max PCI bus is 00)
(d1) Allocated Xen hypercall page at 36fff000
(d1) Detected Xen v4.7.6_04-43.39
(d1) xen: copy BIOS tables...
(d1) Copying SMBIOS entry point from 0x00010020 to 0x000f6d20
(d1) Copying MPTABLE from 0xfc0011c0/fc0011d0 to 0x000f6c00
(d1) Copying PIR from 0x00010040 to 0x000f6b80
(d1) Copying ACPI RSDP from 0x000100c0 to 0x000f6b50
(d1) Using pmtimer, ioport 0xb008
(d1) Scan for VGA option rom
(d1) Running option rom at c000:0003
(d1) pmm call arg1=0
(d1) Turning on vga text mode console
(d1) SeaBIOS (version rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org)
(d1) Machine UUID 53e79f11-89b1-4905-af9e-97185830c046
(d1) All threads complete.
(d1) Found 0 lpt ports
(d1) Found 1 serial ports
(d1) ATA controller 1 at 1f0/3f4/0 (irq 14 dev 9)
(d1) ATA controller 2 at 170/374/0 (irq 15 dev 9)
(d1) ata0-0: QEMU HARDDISK ATA-7 Hard-Disk (3072 MiBytes)
(d1) Searching bootorder for: /pci@i0cf8/*@1,1/drive@0/disk@0
(d1) PS2 keyboard initialized
(d1) All threads complete.
(d1) Scan for option roms
(d1) Running option rom at c980:0003
(d1) pmm call arg1=1
(d1) pmm call arg1=0
(d1) pmm call arg1=1
(d1) pmm call arg1=0
(d1) Searching bootorder for: /pci@i0cf8/*@4
(d1)
(d1) Press ESC for boot menu.
(d1)
(d1) Searching bootorder for: HALT
(d1) drive 0x000f6ae0: PCHS=6241/16/63 translation=large LCHS=780/128/63 
s=6291456
(d1) Space available for UMB: ca800-ee000, f6540-f6ae0
(d1) Returned 258048 bytes of ZoneHigh
(d1) e820 map has 6 items:
(d1)   0:  - 0009fc00 = 1 RAM
(d1)   1: 0009fc00 - 000a = 2 RESERVED
(d1)   2: 000f - 0010 = 2 RESERVED
(d1)   3: 0010 - 36fff000 = 1 RAM
(d1)   4: 36fff000 - 3700 = 2 RESERVED
(d1)   5: fc00 - 0001 = 2 RESERVED
(d1) enter handle_19:
(d1)   NULL
(d1) Booting from Hard Disk...
(d1) Booting from :7c00
(XEN) d1v0 Triple fault - invoking HVM shutdown action 1
(XEN) *** Dumping Dom1 vcpu#0 state: ***
(XEN) [ Xen-4.7.6_04-43.39  x86_64  debug=n  Not tainted 

Re: [Xen-devel] [DRBD-user] [PATCH] xen-blkback: Switch to closed state after releasing the backing device

2018-09-10 Thread Lars Ellenberg
On Sat, Sep 08, 2018 at 09:34:32AM +0200, Valentin Vidic wrote:
> On Fri, Sep 07, 2018 at 07:14:59PM +0200, Valentin Vidic wrote:
> > In fact the first one is the original code path before I modified
> > blkback.  The problem is it gets executed async from workqueue so
> > it might not always run before the call to drbdadm secondary.
> 
> As the DRBD device gets released only when the last IO request
> has finished, I found a way to check and wait for this in the
> block-drbd script:

> --- block-drbd.orig 2018-09-08 09:07:23.499648515 +0200
> +++ block-drbd  2018-09-08 09:28:12.892193649 +0200
> @@ -230,6 +230,24 @@
>  and so cannot be mounted ${m2}${when}."
>  }
>  
> +wait_for_inflight()
> +{
> +  local dev="$1"
> +  local inflight="/sys/block/${dev#/dev/}/inflight"
> +  local rd wr
> +
> +  if ! [ -f "$inflight" ]; then
> +return
> +  fi
> +
> +  while true; do
> +read rd wr < $inflight
> +if [ "$rd" = "0" -a "$wr" = "0" ]; then

If it is "idle" now, but still "open",
this will not sleep, and still fail the demotion below.

> +  return
> +fi
> +sleep 1
> +  done
> +}
>  
>  t=$(xenstore_read_default "$XENBUS_PATH/type" 'MISSING')
>  
> @@ -285,6 +303,8 @@
>  drbd_lrole="${drbd_role%%/*}"
>  drbd_dev="$(drbdadm sh-dev $drbd_resource)"
>  
> +wait_for_inflight $drbd_dev
> +
>  if [ "$drbd_lrole" != 'Secondary' ]; then
>drbdadm secondary $drbd_resource

You try to help it by "waiting forever until it appears to be idle".
I suggest at least limiting the retries by iteration or time.
And also (or, instead; but you'd potentially get a number of
"scary messages" in the logs) add something like:
  for i in 1 2 3 5 7 x; do
drbdadm secondary $drbd_resource && exit 0
if [ $i = x ]; then
  # ... "appears to still be in use, maybe by" ...
  fuser -v $drbd_dev
  exit 1
# else ... "will retry in $i seconds" ...
fi
sleep $i
  done
...

Or, well, yes, fix blkback to not "defer" the final close "too long",
if at all possible.

Lars


[Xen-devel] [PATCH v19 12/13] x86/hvm: Remove redundant save functions

2018-09-10 Thread Alexandru Isaila
This patch removes the redundant save functions and renames the
save_one* functions to save. It then changes the domain parameter to
vcpu in the save functions and adapts the print messages to match the
format of the other save-related messages.

Signed-off-by: Alexandru Isaila 

---
Changes since V18:
- Add const struct domain to rtc_save and hpet_save
- Latched the vCPU into a local variable in hvm_save_one()
- Add HVMSR_PER_VCPU kind check to the bounds if.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 18 +---
 xen/arch/x86/emul-i8254.c  |  5 ++-
 xen/arch/x86/hvm/hpet.c|  7 ++--
 xen/arch/x86/hvm/hvm.c | 75 +++---
 xen/arch/x86/hvm/irq.c | 15 ---
 xen/arch/x86/hvm/mtrr.c| 22 ++
 xen/arch/x86/hvm/pmtimer.c |  5 ++-
 xen/arch/x86/hvm/rtc.c |  5 ++-
 xen/arch/x86/hvm/save.c| 28 +++--
 xen/arch/x86/hvm/vioapic.c |  5 ++-
 xen/arch/x86/hvm/viridian.c| 23 ++-
 xen/arch/x86/hvm/vlapic.c  | 38 ++---
 xen/arch/x86/hvm/vpic.c|  5 ++-
 xen/include/asm-x86/hvm/save.h |  8 +---
 14 files changed, 63 insertions(+), 196 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 71afc06f9a..f15835e9f6 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -350,7 +350,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 }
 
 #if CONFIG_HVM
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_vmce_vcpu ctxt = {
 .caps = v->arch.vmce.mcg_cap,
@@ -362,21 +362,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, 
hvm_domain_context_t *h)
  return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = vmce_save_vcpu_ctxt_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -397,7 +382,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 #endif
 
diff --git a/xen/arch/x86/emul-i8254.c b/xen/arch/x86/emul-i8254.c
index a85dfcccbc..73be4188ad 100644
--- a/xen/arch/x86/emul-i8254.c
+++ b/xen/arch/x86/emul-i8254.c
@@ -391,8 +391,9 @@ void pit_stop_channel0_irq(PITState *pit)
 spin_unlock(>lock);
 }
 
-static int pit_save(struct domain *d, hvm_domain_context_t *h)
+static int pit_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+struct domain *d = v->domain;
 PITState *pit = domain_vpit(d);
 int rc;
 
@@ -438,7 +439,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
 #endif
 
 void pit_reset(struct domain *d)
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 4d8f6da2d9..be371ecc0b 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -570,16 +570,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+const struct domain *d = v->domain;
 HPETState *hp = domain_vhpet(d);
-struct vcpu *v = pt_global_vcpu_target(d);
 int rc;
 uint64_t guest_time;
 
 if ( !has_vhpet(d) )
 return 0;
 
+v = pt_global_vcpu_target(d);
  write_lock(&hp->lock);
 guest_time = (v->arch.hvm.guest_time ?: hvm_get_guest_time(v)) /
  STIME_PER_HPET_TICK;
@@ -695,7 +696,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 58c03bed15..43145586c5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -731,7 +731,7 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_tsc_adjust ctxt = {
 .tsc_adjust = v->arch.hvm.msr_tsc_adjust,
@@ -740,21 +740,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(TSC_ADJUST, v->vcpu_id, 

[Xen-devel] [PATCH v19 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 

---
Changes since V14:
- Moved all the operations in the initializer.
---
 xen/arch/x86/hvm/viridian.c | 30 +++---
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index a23d0876c4..2df0127a46 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1030,24 +1030,32 @@ static int viridian_load_domain_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
   viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_viridian_vcpu_context ctxt = {
+.vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw,
+.vp_assist_pending = v->arch.hvm.viridian.vp_assist.pending,
+};
 
-if ( !is_viridian_domain(d) )
+if ( !is_viridian_domain(v->domain) )
 return 0;
 
-for_each_vcpu( d, v ) {
-struct hvm_viridian_vcpu_context ctxt = {
-.vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw,
-.vp_assist_pending = v->arch.hvm.viridian.vp_assist.pending,
-};
+return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-return 1;
+for_each_vcpu ( d, v )
+{
+err = viridian_save_vcpu_ctxt_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1



[Xen-devel] [PATCH v19 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-09-10 Thread Alexandru Isaila
This patch changes hvm_save_one() to save one typecode from one vcpu;
now that the save functions get data from a single vcpu, we can pause
the specific vcpu instead of the whole domain.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V15:
- Moved pause/unpause calls into hvm_save_one()
- Re-add the loop in hvm_save_one().
---
 xen/arch/x86/domctl.c   |  2 --
 xen/arch/x86/hvm/save.c | 10 ++
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 797841e803..2284128e93 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -599,12 +599,10 @@ long arch_do_domctl(
  !is_hvm_domain(d) )
 break;
 
-domain_pause(d);
 ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
domctl->u.hvmcontext_partial.instance,
domctl->u.hvmcontext_partial.buffer,
&domctl->u.hvmcontext_partial.bufsz);
-domain_unpause(d);
 
 if ( !ret )
 copyback = true;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index c7e2ecdb9f..403c84da73 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -155,6 +155,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 if ( !ctxt.data )
 return -ENOMEM;
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_pause(d->vcpu[instance]);
+else
+domain_pause(d);
+
  if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )
 printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
@@ -186,6 +191,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 }
 }
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_unpause(d->vcpu[instance]);
+else
+domain_unpause(d);
+
 xfree(ctxt.data);
 return rv;
 }
-- 
2.17.1



[Xen-devel] [PATCH v19 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return.
---
 xen/arch/x86/hvm/hvm.c | 106 +++--
 1 file changed, 59 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1013b6ecc4..1669957f1c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1339,69 +1339,81 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+struct hvm_msr *ctxt;
+unsigned int i;
+int err;
 
-for_each_vcpu ( d, v )
+err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+if ( err )
+return err;
+ctxt = (struct hvm_msr *)&h->data[h->cur];
+ctxt->count = 0;
+
+for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
 {
-struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
-struct hvm_msr *ctxt;
-unsigned int i;
+uint64_t val;
+int rc = guest_rdmsr(v, msrs_to_send[i], &val);
 
-if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
-return 1;
-ctxt = (struct hvm_msr *)&h->data[h->cur];
-ctxt->count = 0;
+/*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+if ( rc == X86EMUL_EXCEPTION )
+continue;
 
-for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+if ( rc != X86EMUL_OKAY )
 {
-uint64_t val;
-int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+ASSERT_UNREACHABLE();
+return -ENXIO;
+}
 
-/*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
-if ( rc == X86EMUL_EXCEPTION )
-continue;
+if ( !val )
+continue; /* Skip empty MSRs. */
 
-if ( rc != X86EMUL_OKAY )
-{
-ASSERT_UNREACHABLE();
-return -ENXIO;
-}
+ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ctxt->msr[ctxt->count++].val = val;
+}
 
-if ( !val )
-continue; /* Skip empty MSRs. */
+if ( hvm_funcs.save_msr )
+hvm_funcs.save_msr(v, ctxt);
 
-ctxt->msr[ctxt->count].index = msrs_to_send[i];
-ctxt->msr[ctxt->count++].val = val;
-}
+ASSERT(ctxt->count <= msr_count_max);
 
-if ( hvm_funcs.save_msr )
-hvm_funcs.save_msr(v, ctxt);
+for ( i = 0; i < ctxt->count; ++i )
+ctxt->msr[i]._rsvd = 0;
 
-ASSERT(ctxt->count <= msr_count_max);
+if ( ctxt->count )
+{
+/* Rewrite length to indicate how much space we actually used. */
+desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+}
+else
+/* or rewind and remove the descriptor from the stream. */
+h->cur -= sizeof(struct hvm_save_descriptor);
 
-for ( i = 0; i < ctxt->count; ++i )
-ctxt->msr[i]._rsvd = 0;
+return 0;
+}
 
-if ( ctxt->count )
-{
-/* Rewrite length to indicate how much space we actually used. */
-desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-}
-else
-/* or rewind and remove the descriptor from the stream. */
-h->cur -= sizeof(struct hvm_save_descriptor);
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_msrs_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1



[Xen-devel] [PATCH v19 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v16:
- Address style comments.
---
 xen/arch/x86/hvm/mtrr.c | 80 ++---
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index de1b5c4614..f3dd972b4a 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -690,52 +690,58 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, 
uint64_t gfn_start,
 return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+const struct mtrr_state *mtrr_state = &v->arch.hvm.mtrr;
+struct hvm_hw_mtrr hw_mtrr = {
+.msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+   MTRRdefType_FE) |
+MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+.msr_mtrr_cap  = mtrr_state->mtrr_cap,
+};
+unsigned int i;
 
-/* save mtrr */
-for_each_vcpu(d, v)
+if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
 {
-const struct mtrr_state *mtrr_state = &v->arch.hvm.mtrr;
-struct hvm_hw_mtrr hw_mtrr = {
-.msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
-   MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-.msr_mtrr_cap  = mtrr_state->mtrr_cap,
-};
-unsigned int i;
+dprintk(XENLOG_G_ERR,
+"HVM save: %pv: too many (%lu) variable range MTRRs\n",
+v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+return -EINVAL;
+}
 
-if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-{
-dprintk(XENLOG_G_ERR,
-"HVM save: %pv: too many (%lu) variable range MTRRs\n",
-v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-return -EINVAL;
-}
+hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+
+for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+{
+hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges->base;
+hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges->mask;
+}
 
-hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
+ sizeof(mtrr_state->fixed_ranges));
 
-for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-{
-/* save physbase */
-hw_mtrr.msr_mtrr_var[i*2] =
-((uint64_t*)mtrr_state->var_ranges)[i*2];
-/* save physmask */
-hw_mtrr.msr_mtrr_var[i*2+1] =
-((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-}
+memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
+   sizeof(hw_mtrr.msr_mtrr_fixed));
 
-for ( i = 0; i < NUM_FIXED_MSR; i++ )
-hw_mtrr.msr_mtrr_fixed[i] =
-((uint64_t*)mtrr_state->fixed_ranges)[i];
+return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
+
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
-return 1;
+/* save mtrr */
+for_each_vcpu(d, v)
+{
+   err = hvm_save_mtrr_msr_one(v, h);
+   if ( err )
+   break;
 }
-return 0;
+
+return err;
 }
 
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1



[Xen-devel] [PATCH v19 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V11:
- Removed the memset and added init with {}.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 21 +
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 302e13a14d..c2b2b6623c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -350,6 +350,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 }
 
 #if CONFIG_HVM
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_vmce_vcpu ctxt = {
+.caps = v->arch.vmce.mcg_cap,
+.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
+.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
+.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
+};
+
+return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+}
+
 static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
@@ -357,14 +369,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 
 for_each_vcpu ( d, v )
 {
-struct hvm_vmce_vcpu ctxt = {
-.caps = v->arch.vmce.mcg_cap,
-.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
-.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
-.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
-};
-
-err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+err = vmce_save_vcpu_ctxt_one(v, h);
 if ( err )
 break;
 }
-- 
2.17.1



[Xen-devel] v4.19-rc3, bug in __gnttab_unmap_refs_async with HVM domU

2018-09-10 Thread Olaf Hering
While preparing another variant of the fix for the bug in disable_hotplug_cpu,
this crash happened for me while starting my HVM domU a second time. dom0 runs
Xen 4.7.6.
I guess it crashed while it was shutting down the domU running a xenlinux-based
kernel.

Olaf

[ 8114.320383] BUG: unable to handle kernel NULL pointer dereference at 
0008
[ 8114.320416] PGD 1fd6a1f067 P4D 1fd6a1f067 PUD 1fd4b4a067 PMD 0
[ 8114.320427] Oops:  [#1] PREEMPT SMP NOPTI
[ 8114.320435] CPU: 0 PID: 828 Comm: xenstored Tainted: GE 
4.19.321-default-bug1106594 #5
[ 8114.320444] Hardware name: HP ProLiant SL160z G6 /ProLiant SL160z G6 , BIOS 
O33 07/28/2009
[ 8114.320458] RIP: e030:__gnttab_unmap_refs_async+0x29/0x90
[ 8114.320464] Code: 00 66 66 66 66 90 53 8b 8f 80 00 00 00 31 c0 48 89 fb 48 
8b 57 78 85 c9 75 09 eb 49 83 c0 01 39 c8 74 42 4c 63 c0 4e 8b 04 c2 <4d> 8b 48 
08 41 f6 c1 01 75 4d 45 8b 40 34
 41 83 f8 01 7e de 8b 83
[ 8114.320480] RSP: e02b:c900471d3bd8 EFLAGS: 00010297
[ 8114.320487] RAX: 0001 RBX: c900471d3c20 RCX: 006c
[ 8114.320495] RDX: 881fd9f3eac0 RSI: 810ad2f0 RDI: c900471d3c20
[ 8114.320503] RBP: 02ccbdb0 R08:  R09: dead0100
[ 8114.320511] R10: 1093 R11: 881fd3340840 R12: 880101609d80
[ 8114.320518] R13: 006c R14: 881fd68dbb01 R15: 880101609d80
[ 8114.320533] FS:  7fd3352a3880() GS:881fdf40() 
knlGS:
[ 8114.320541] CS:  e033 DS:  ES:  CR0: 80050033
[ 8114.320548] CR2: 0008 CR3: 001fd33ca000 CR4: 2660
[ 8114.320560] Call Trace:
[ 8114.320569]  gnttab_unmap_refs_sync+0x40/0x60
[ 8114.320580]  __unmap_grant_pages+0x80/0x140 [xen_gntdev]
[ 8114.320587]  ? gnttab_unmap_refs_sync+0x60/0x60
[ 8114.320596]  ? __queue_work+0x3f0/0x3f0
[ 8114.320602]  ? gnttab_free_pages+0x20/0x20
[ 8114.320610]  unmap_grant_pages+0x80/0xe0 [xen_gntdev]
[ 8114.320618]  unmap_if_in_range+0x53/0xa0 [xen_gntdev]
[ 8114.320626]  mn_invl_range_start+0x4a/0xe0 [xen_gntdev]
[ 8114.320635]  __mmu_notifier_invalidate_range_start+0x6b/0xe0
[ 8114.320646]  unmap_vmas+0x71/0x90
[ 8114.320652]  unmap_region+0x9c/0xf0
[ 8114.320660]  ? __vma_rb_erase+0x109/0x200
[ 8114.320666]  do_munmap+0x213/0x390
[ 8114.320673]  __x64_sys_brk+0x13c/0x1b0
[ 8114.320682]  do_syscall_64+0x5d/0x110
[ 8114.320690]  entry_SYSCALL_64_after_hwframe+0x49/0xbe




[Xen-devel] [PATCH v19 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V13:
- Moved tsc_adjust to the initializer.
---
 xen/arch/x86/hvm/hvm.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c198c9190a..b0cf3a836f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -731,16 +731,23 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_tsc_adjust ctxt = {
+.tsc_adjust = v->arch.hvm.msr_tsc_adjust,
+};
+
+return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_tsc_adjust ctxt;
 int err = 0;
 
 for_each_vcpu ( d, v )
 {
-ctxt.tsc_adjust = v->arch.hvm.msr_tsc_adjust;
-err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+err = hvm_save_tsc_adjust_one(v, h);
 if ( err )
 break;
 }
-- 
2.17.1



[Xen-devel] [PATCH v19 10/13] x86/hvm: Add handler for save_one funcs

2018-09-10 Thread Alexandru Isaila
Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Change handler name from hvm_save_one_handler to 
hvm_save_vcpu_handler.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/emul-i8254.c  | 2 +-
 xen/arch/x86/hvm/hpet.c| 2 +-
 xen/arch/x86/hvm/hvm.c | 7 +--
 xen/arch/x86/hvm/irq.c | 6 +++---
 xen/arch/x86/hvm/mtrr.c| 4 ++--
 xen/arch/x86/hvm/pmtimer.c | 2 +-
 xen/arch/x86/hvm/rtc.c | 2 +-
 xen/arch/x86/hvm/save.c| 3 +++
 xen/arch/x86/hvm/vioapic.c | 2 +-
 xen/arch/x86/hvm/viridian.c| 3 ++-
 xen/arch/x86/hvm/vlapic.c  | 8 
 xen/arch/x86/hvm/vpic.c| 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +-
 14 files changed, 31 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index c2b2b6623c..71afc06f9a 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -397,6 +397,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 #endif
 
diff --git a/xen/arch/x86/emul-i8254.c b/xen/arch/x86/emul-i8254.c
index 7f1ded2623..a85dfcccbc 100644
--- a/xen/arch/x86/emul-i8254.c
+++ b/xen/arch/x86/emul-i8254.c
@@ -438,7 +438,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 #endif
 
 void pit_reset(struct domain *d)
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index cbd1efbc9f..4d8f6da2d9 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -695,7 +695,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1669957f1c..58c03bed15 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -776,6 +776,7 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1156,8 +1157,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
-  1, HVMSR_PER_VCPU);
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+  hvm_load_cpu_ctxt, 1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
save_area) + \
@@ -1508,6 +1509,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_XSAVE_CODE,
 "CPU_XSAVE",
 hvm_save_cpu_xsave_states,
+hvm_save_cpu_xsave_states_one,
 hvm_load_cpu_xsave_states,
 HVM_CPU_XSAVE_SIZE(xfeature_mask) +
 sizeof(struct hvm_save_descriptor),
@@ -1520,6 +1522,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_MSR_CODE,
 "CPU_MSR",
 hvm_save_cpu_msrs,
+hvm_save_cpu_msrs_one,
 hvm_load_cpu_msrs,
 HVM_CPU_MSR_SIZE(msr_count_max) +
 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index fe2c2fa06c..9502bae645 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -773,9 +773,9 @@ static int irq_load_link(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
   1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index f3dd972b4a..2ddf5074cb 

[Xen-devel] [PATCH v19 09/13] x86/hvm: Introduce lapic_save_regs_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 26 +++---
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 31c7a66d01..8b2955365f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1422,26 +1422,30 @@ static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 return err;
 }
 
+static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+if ( hvm_funcs.sync_pir_to_irr )
+hvm_funcs.sync_pir_to_irr(v);
+
+return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
+}
+
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-if ( hvm_funcs.sync_pir_to_irr )
-hvm_funcs.sync_pir_to_irr(v);
-
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
+err = lapic_save_regs_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 /*
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v19 11/13] x86/domctl: Use hvm_save_vcpu_handler

2018-09-10 Thread Alexandru Isaila
This patch is aimed at using the new save_one functions in hvm_save().

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V17:
- Remove double ;
- Move struct vcpu *v to reduce scope
- Remove stray lines.
---
 xen/arch/x86/hvm/save.c | 26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 870042b27f..e059ab4e13 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -195,7 +195,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 char *c;
 struct hvm_save_header hdr;
 struct hvm_save_end end;
-hvm_save_handler handler;
 unsigned int i;
 
 if ( d->is_dying )
@@ -223,8 +222,27 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 /* Save all available kinds of state */
 for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
 {
-handler = hvm_sr_handlers[i].save;
-if ( handler != NULL )
+hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
+hvm_save_handler handler = hvm_sr_handlers[i].save;
+
+if ( save_one_handler )
+{
+struct vcpu *v;
+
+for_each_vcpu ( d, v )
+{
+printk(XENLOG_G_INFO "HVM %pv save: %s\n",
+   v, hvm_sr_handlers[i].name);
+if ( save_one_handler(v, h) != 0 )
+{
+printk(XENLOG_G_ERR
+   "HVM %pv save: failed to save type %"PRIu16"\n",
+   v, i);
+return -ENODATA;
+}
+}
+}
+else if ( handler )
 {
 printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
@@ -233,7 +251,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 printk(XENLOG_G_ERR
"HVM%d save: failed to save type %"PRIu16"\n",
d->domain_id, i);
-return -EFAULT;
+return -ENODATA;
 }
 }
 }
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v19 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Move all free fields to the initializer
- Add blank line to before the return
- Move v->pause_flags check to the save_one function.
---
 xen/arch/x86/hvm/hvm.c | 219 +
 1 file changed, 113 insertions(+), 106 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b0cf3a836f..e1133f64d7 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -778,119 +778,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct segment_register seg;
+struct hvm_hw_cpu ctxt = {
+.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm.sync_tsc),
+.msr_tsc_aux = hvm_msr_tsc_aux(v),
+.rax = v->arch.user_regs.rax,
+.rbx = v->arch.user_regs.rbx,
+.rcx = v->arch.user_regs.rcx,
+.rdx = v->arch.user_regs.rdx,
+.rbp = v->arch.user_regs.rbp,
+.rsi = v->arch.user_regs.rsi,
+.rdi = v->arch.user_regs.rdi,
+.rsp = v->arch.user_regs.rsp,
+.rip = v->arch.user_regs.rip,
+.rflags = v->arch.user_regs.rflags,
+.r8  = v->arch.user_regs.r8,
+.r9  = v->arch.user_regs.r9,
+.r10 = v->arch.user_regs.r10,
+.r11 = v->arch.user_regs.r11,
+.r12 = v->arch.user_regs.r12,
+.r13 = v->arch.user_regs.r13,
+.r14 = v->arch.user_regs.r14,
+.r15 = v->arch.user_regs.r15,
+.dr0 = v->arch.debugreg[0],
+.dr1 = v->arch.debugreg[1],
+.dr2 = v->arch.debugreg[2],
+.dr3 = v->arch.debugreg[3],
+.dr6 = v->arch.debugreg[6],
+.dr7 = v->arch.debugreg[7],
+};
+
+/*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
+if ( v->pause_flags & VPF_down )
+return 0;
+
+/* Architecture-specific vmcs/vmcb bits */
+hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+hvm_get_segment_register(v, x86_seg_idtr, &seg);
+ctxt.idtr_limit = seg.limit;
+ctxt.idtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+ctxt.gdtr_limit = seg.limit;
+ctxt.gdtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_cs, &seg);
+ctxt.cs_sel = seg.sel;
+ctxt.cs_limit = seg.limit;
+ctxt.cs_base = seg.base;
+ctxt.cs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ds, &seg);
+ctxt.ds_sel = seg.sel;
+ctxt.ds_limit = seg.limit;
+ctxt.ds_base = seg.base;
+ctxt.ds_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_es, &seg);
+ctxt.es_sel = seg.sel;
+ctxt.es_limit = seg.limit;
+ctxt.es_base = seg.base;
+ctxt.es_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ss, &seg);
+ctxt.ss_sel = seg.sel;
+ctxt.ss_limit = seg.limit;
+ctxt.ss_base = seg.base;
+ctxt.ss_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_fs, &seg);
+ctxt.fs_sel = seg.sel;
+ctxt.fs_limit = seg.limit;
+ctxt.fs_base = seg.base;
+ctxt.fs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_gs, &seg);
+ctxt.gs_sel = seg.sel;
+ctxt.gs_limit = seg.limit;
+ctxt.gs_base = seg.base;
+ctxt.gs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_tr, &seg);
+ctxt.tr_sel = seg.sel;
+ctxt.tr_limit = seg.limit;
+ctxt.tr_base = seg.base;
+ctxt.tr_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+ctxt.ldtr_sel = seg.sel;
+ctxt.ldtr_limit = seg.limit;
+ctxt.ldtr_base = seg.base;
+ctxt.ldtr_arbytes = seg.attr;
+
+if ( v->fpu_initialised )
+{
+memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+ctxt.flags = XEN_X86_FPU_INITIALISED;
+}
+
+return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_hw_cpu ctxt;
-struct segment_register seg;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-/* We don't need to save state for a vcpu that is down; the restore 
- * code will leave it down if there is nothing saved. */
-if ( v->pause_flags & VPF_down )
-continue;
-
-memset(&ctxt, 0, sizeof(ctxt));
-
-/* Architecture-specific vmcs/vmcb bits */
-hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
-ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm.sync_tsc);
-
-ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-hvm_get_segment_register(v, x86_seg_idtr, &seg);
-ctxt.idtr_limit = seg.limit;
-ctxt.idtr_base = seg.base;
-
-hvm_get_segment_register(v, 

[Xen-devel] [PATCH v19 00/13] x86/domctl: Save info for one vcpu instance

2018-09-10 Thread Alexandru Isaila
Hi all,

This patch series addresses the idea of saving data from a single vcpu
instance. It starts by adding *_save_one functions, then introduces a handler
for the new save_one functions and makes use of it in hvm_save() and
hvm_save_one(). The final patches clean up, remove the now redundant save
functions, and rework hvm_save_one() to pause a single vcpu instead of the
whole domain.
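
For illustration, a minimal sketch of the shape this gives the save path
(signatures as they appear in the patches of this thread; not a complete
listing):

    /* Per-vcpu variant of the save handler introduced by this series. */
    typedef int (*hvm_save_vcpu_handler)(struct vcpu *v,
                                         hvm_domain_context_t *h);

    /* The registration macro gains a save_one slot; devices without a
     * per-vcpu variant pass NULL there. */
    HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
                              hvm_save_tsc_adjust_one,
                              hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);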

Cheers,

NOTE: Tested with tools/misc/xen-hvmctx, tools/xentrace/xenctx, xl save/restore,
custom hvm_getcontext/partial code, and by debugging the getcontext path during
guest boot.

Alexandru Isaila (13):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Introduce lapic_save_hidden_one
x86/hvm: Introduce lapic_save_regs_one func
x86/hvm: Add handler for save_one funcs
x86/domctl: Use hvm_save_vcpu_handler
x86/hvm: Remove redundant save functions
x86/domctl: Don't pause the whole domain if only


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v19 08/13] x86/hvm: Introduce lapic_save_hidden_one

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 20 
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 04702e96c9..31c7a66d01 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1399,23 +1399,27 @@ static void lapic_rearm(struct vlapic *s)
 s->timer_last_update = s->pt.last_plt_gtime;
 }
 
+static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+return hvm_save_entry(LAPIC, v->vcpu_id, h, &vcpu_vlapic(v)->hw);
+}
+
 static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
+err = lapic_save_hidden_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v19 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return
- Move xsave_enabled() check to the save_one func.
---
 xen/arch/x86/hvm/hvm.c | 47 ++
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e1133f64d7..1013b6ecc4 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1163,35 +1163,46 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
save_area) + \
   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
 struct hvm_hw_cpu_xsave *ctxt;
+unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+int err;
 
-if ( !cpu_has_xsave )
+if ( !cpu_has_xsave || !xsave_enabled(v) )
 return 0;   /* do nothing */
 
-for_each_vcpu ( d, v )
-{
-unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+if ( err )
+return err;
 
-if ( !xsave_enabled(v) )
-continue;
-if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-return 1;
-ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
-h->cur += size;
+ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+h->cur += size;
+ctxt->xfeature_mask = xfeature_mask;
+ctxt->xcr0 = v->arch.xcr0;
+ctxt->xcr0_accum = v->arch.xcr0_accum;
 
-ctxt->xfeature_mask = xfeature_mask;
-ctxt->xcr0 = v->arch.xcr0;
-ctxt->xcr0_accum = v->arch.xcr0_accum;
-expand_xsave_states(v, &ctxt->save_area,
-size - offsetof(typeof(*ctxt), save_area));
-}
+expand_xsave_states(v, &ctxt->save_area,
+size - offsetof(typeof(*ctxt), save_area));
 
 return 0;
 }
 
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_xsave_states_one(v, h);
+if ( err )
+break;
+}
+
+return err;
+}
+
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 1/1] cameraif: add ABI for para-virtual camera

2018-09-10 Thread Hans Verkuil
On 09/10/2018 01:49 PM, Oleksandr Andrushchenko wrote:
> On 09/10/2018 02:09 PM, Hans Verkuil wrote:
>> On 09/10/2018 11:52 AM, Oleksandr Andrushchenko wrote:
>>> On 09/10/2018 12:04 PM, Hans Verkuil wrote:
 On 09/10/2018 10:24 AM, Oleksandr Andrushchenko wrote:
> On 09/10/2018 10:53 AM, Hans Verkuil wrote:
>> Hi Oleksandr,
>>
>> On 09/10/2018 09:16 AM, Oleksandr Andrushchenko wrote:
 

 I suspect that you likely will want to support such sources 
 eventually, so
 it pays to design this with that in mind.
>>> Again, I think that this is the backend to hide these
>>> use-cases from the frontend.
>> I'm not sure you can: say you are playing a bluray connected to the 
>> system
>> with HDMI, then if there is a resolution change, what do you do? You can 
>> tear
>> everything down and build it up again, or you can just tell frontends 
>> that
>> something changed and that they have to look at the new vcamera 
>> configuration.
>>
>> The latter seems to be more sensible to me. It is really not much that 
>> you
>> need to do: all you really need is an event signalling that something 
>> changed.
>> In V4L2 that's the V4L2_EVENT_SOURCE_CHANGE.
> well, this complicates things a lot as I'll have to
> re-allocate buffers - right?
 Right. Different resolutions mean different sized buffers and usually 
 lots of
 changes throughout the whole video pipeline, which in this case can even
 go into multiple VMs.

 One additional thing to keep in mind for the future: 
 V4L2_EVENT_SOURCE_CHANGE
 has a flags field that tells userspace what changed. Right now that is 
 just the
 resolution, but in the future you can expect flags for cases where just the
 colorspace information changes, but not the resolution.

 Which reminds me of two important missing pieces of information in your 
 protocol:

 1) You need to communicate the colorspace data:

 - colorspace
 - xfer_func
 - ycbcr_enc/hsv_enc (unlikely you ever want to support HSV pixelformats, 
 so I
 think you can ignore hsv_enc)
 - quantization

 See 
 https://hverkuil.home.xs4all.nl/spec/uapi/v4l/pixfmt-v4l2.html#c.v4l2_pix_format
 and the links to the colorspace sections in the V4L2 spec for details).

 This information is part of the format, it is reported by the driver.
>>> I'll take a look and think what can be put and how into the protocol,
>>> do you think I'll have to implement all the above for
>>> this stage?
>> Yes. Without it VMs will have no way of knowing how to reproduce the right 
>> colors.
>> They don't *have* to use this information, but it should be there. For 
>> cameras
>> this isn't all that important, for SDTV/HDTV sources this becomes more 
>> relevant
>> (esp. the quantization and ycbcr_enc information) and for sources with 
>> BT.2020/HDR
>> formats this is critical.
> ok, then I'll add the following to the set_config request/response:
> 
>  uint32_t colorspace;
>  uint32_t xfer_func;
>  uint32_t ycbcr_enc;
>  uint32_t quantization;
> 
> With this respect, I will need to put some OS agnostic constants
> into the protocol, so if backend and frontend are not Linux/V4L2
> based they can still talk to each other.
> I see that V4L2 already defines constants for the above: [1], [2], [3], [4].
> 
> Do you think I can define the same replacing V4L2_ prefix
> with XENCAMERA_, e.g. V4L2_XFER_FUNC_SRGB -> XENCAMERA_XFER_FUNC_SRGB?

Yes.
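
(For example, a sketch assuming the protocol simply mirrors the V4L2
numbering, which would be a design choice, not a requirement:

    #define XENCAMERA_XFER_FUNC_709     1
    #define XENCAMERA_XFER_FUNC_SRGB    2
)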

> 
> Do I need to define all those or there can be some subset of the
> above for my simpler use-case?

Most of these defines directly map to standards. I would skip the following
defines:

V4L2_COLORSPACE_DEFAULT (not applicable)
V4L2_COLORSPACE_470_SYSTEM_*  (rarely used, if received by the HW the Xen 
backend
should map this to V4L2_COLORSPACE_SMPTE170M)
V4L2_COLORSPACE_JPEG (historical V4L2 artifact, see here how to map:
 
https://hverkuil.home.xs4all.nl/spec/uapi/v4l/colorspaces-details.html#col-jpeg)

V4L2_COLORSPACE_SMPTE240M (rarely used, map to V4L2_COLORSPACE_SMPTE170M if 
seen in backend)

V4L2_XFER_FUNC_SMPTE240M (rarely used, map to V4L2_XFER_FUNC_709)

V4L2_YCBCR_ENC_SMPTE240M (rarely used, map to V4L2_YCBCR_ENC_709)

While V4L2 allows 0 (DEFAULT) values for xfer_func, ycbcr_enc and quantization, 
and
provides macros to map default values to the actual values (for legacy reasons),
the Xen backend should always fill this in explicitly, using those same mapping
macros (see e.g. V4L2_MAP_XFER_FUNC_DEFAULT).

The V4L2 spec has extensive information on colorspaces (sections 2.14-2.17).
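
(For concreteness, a sketch of that explicit fill-in on the backend side,
assuming the Linux V4L2 headers (linux/videodev2.h); the V4L2_* names are
real, the surrounding variables are placeholders:

    /* Map DEFAULT colorimetry reported by the device to explicit values
     * before putting them on the Xen protocol wire. */
    if (xfer_func == V4L2_XFER_FUNC_DEFAULT)
        xfer_func = V4L2_MAP_XFER_FUNC_DEFAULT(colorspace);
    if (ycbcr_enc == V4L2_YCBCR_ENC_DEFAULT)
        ycbcr_enc = V4L2_MAP_YCBCR_ENC_DEFAULT(colorspace);
    if (quantization == V4L2_QUANTIZATION_DEFAULT)
        quantization = V4L2_MAP_QUANTIZATION_DEFAULT(is_rgb, colorspace,
                                                     ycbcr_enc);
)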

> 
>> The vivid driver can actually reproduce all combinations, so that's a good 
>> driver
>> to test this with.
> You mean I can use it on backend side instead of real HW camera and
> test all the configurations possible/those of interest?


Re: [Xen-devel] [PATCH 1/1] cameraif: add ABI for para-virtual camera

2018-09-10 Thread Oleksandr Andrushchenko

On 09/10/2018 02:09 PM, Hans Verkuil wrote:

On 09/10/2018 11:52 AM, Oleksandr Andrushchenko wrote:

On 09/10/2018 12:04 PM, Hans Verkuil wrote:

On 09/10/2018 10:24 AM, Oleksandr Andrushchenko wrote:

On 09/10/2018 10:53 AM, Hans Verkuil wrote:

Hi Oleksandr,

On 09/10/2018 09:16 AM, Oleksandr Andrushchenko wrote:




I suspect that you likely will want to support such sources eventually, so
it pays to design this with that in mind.

Again, I think that this is the backend to hide these
use-cases from the frontend.

I'm not sure you can: say you are playing a bluray connected to the system
with HDMI, then if there is a resolution change, what do you do? You can tear
everything down and build it up again, or you can just tell frontends that
something changed and that they have to look at the new vcamera configuration.

The latter seems to be more sensible to me. It is really not much that you
need to do: all you really need is an event signalling that something changed.
In V4L2 that's the V4L2_EVENT_SOURCE_CHANGE.

well, this complicates things a lot as I'll have to
re-allocate buffers - right?

Right. Different resolutions mean different sized buffers and usually lots of
changes throughout the whole video pipeline, which in this case can even
go into multiple VMs.

One additional thing to keep in mind for the future: V4L2_EVENT_SOURCE_CHANGE
has a flags field that tells userspace what changed. Right now that is just the
resolution, but in the future you can expect flags for cases where just the
colorspace information changes, but not the resolution.

Which reminds me of two important missing pieces of information in your 
protocol:

1) You need to communicate the colorspace data:

- colorspace
- xfer_func
- ycbcr_enc/hsv_enc (unlikely you ever want to support HSV pixelformats, so I
think you can ignore hsv_enc)
- quantization

See 
https://hverkuil.home.xs4all.nl/spec/uapi/v4l/pixfmt-v4l2.html#c.v4l2_pix_format
and the links to the colorspace sections in the V4L2 spec for details).

This information is part of the format, it is reported by the driver.

I'll take a look and think what can be put and how into the protocol,
do you think I'll have to implement all the above for
this stage?

Yes. Without it VMs will have no way of knowing how to reproduce the right 
colors.
They don't *have* to use this information, but it should be there. For cameras
this isn't all that important, for SDTV/HDTV sources this becomes more relevant
(esp. the quantization and ycbcr_enc information) and for sources with 
BT.2020/HDR
formats this is critical.

ok, then I'll add the following to the set_config request/response:

    uint32_t colorspace;
    uint32_t xfer_func;
    uint32_t ycbcr_enc;
    uint32_t quantization;

With this respect, I will need to put some OS agnostic constants
into the protocol, so if backend and frontend are not Linux/V4L2
based they can still talk to each other.
I see that V4L2 already defines constants for the above: [1], [2], [3], [4].

Do you think I can define the same replacing V4L2_ prefix
with XENCAMERA_, e.g. V4L2_XFER_FUNC_SRGB -> XENCAMERA_XFER_FUNC_SRGB?

Do I need to define all those or there can be some subset of the
above for my simpler use-case?


The vivid driver can actually reproduce all combinations, so that's a good 
driver
to test this with.

You mean I can use it on backend side instead of real HW camera and
test all the configurations possible/those of interest?

2) If you support interlaced formats and V4L2_FIELD_ALTERNATE (i.e.
 each buffer contains a single field), then you need to be able to tell
 userspace whether the dequeued buffer contains a top or bottom field.

I think at the first stage we can assume that interlaced
formats are not supported and add such support later if need be.

Frankly I consider that a smart move :-) Interlaced formats are awful...

You just have to keep this in mind if you ever have to add support for this.

Agreed



Also, what to do with dropped frames/fields: V4L2 has a sequence counter and
timestamp that can help detecting that. You probably need something similar.

Ok, this can be reported as part of XENCAMERA_EVT_FRAME_AVAIL event

But anyways, I can add
#define XENCAMERA_EVT_CFG_CHANGE   0x01
in the protocol, so we can address this use-case
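
(A hypothetical wire layout for those two events; the event names come from
the discussion above, every struct field below is an assumption:

    #define XENCAMERA_EVT_CFG_CHANGE    0x01  /* source configuration changed */
    #define XENCAMERA_EVT_FRAME_AVAIL   0x02  /* a filled buffer is ready */

    struct xencamera_frame_avail_evt {
        uint32_t index;      /* index of the buffer that became ready */
        uint32_t used_sz;    /* bytes of payload actually written */
        uint32_t sequence;   /* frame counter; gaps reveal dropped frames */
        uint64_t timestamp;  /* monotonic timestamp, nanoseconds */
    };
)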




1. set format command:
 * pixel_format - uint32_t, pixel format to be used, FOURCC code.
 * width - uint32_t, width in pixels.
 * height - uint32_t, height in pixels.

2. Set frame rate command:
 + * frame_rate_numer - uint32_t, numerator of the frame rate.
 + * frame_rate_denom - uint32_t, denominator of the frame rate.

3. Set/request num bufs:
 * num_bufs - uint8_t, desired number of buffers to be used.

I like this much better. 1+2 could be combined, but 3 should definitely remain
separate.

ok, then 1+2 combined + 3 separate.
Do you think we can still name 1+2 as "set_format" or "set_config"
will fit better?

set_format is closer to S_FMT as used in 
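
(For reference, a hypothetical layout of the combined 1+2 request with the
colorimetry fields folded in; the field names echo the thread above, but the
struct itself is a sketch, not the posted ABI:

    struct xencamera_config {
        uint32_t pixel_format;      /* FOURCC code */
        uint32_t width;             /* width in pixels */
        uint32_t height;            /* height in pixels */
        uint32_t frame_rate_numer;  /* frame rate numerator */
        uint32_t frame_rate_denom;  /* frame rate denominator */
        uint32_t colorspace;
        uint32_t xfer_func;
        uint32_t ycbcr_enc;
        uint32_t quantization;
    };
)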

Re: [Xen-devel] [PATCH] xentrace: handle sparse cpu ids correctly in xen trace buffer handling

2018-09-10 Thread Jan Beulich
>>> On 10.09.18 at 13:34,  wrote:
> On 30/08/18 11:28, Juergen Gross wrote:
>> On 30/08/18 10:26, Jan Beulich wrote:
>> On 30.08.18 at 09:52,  wrote:
 @@ -202,7 +202,7 @@ static int alloc_trace_bufs(unsigned int pages)
   * Allocate buffers for all of the cpus.
   * If any fails, deallocate what you have so far and exit.
   */
 -for_each_online_cpu(cpu)
 +for_each_present_cpu(cpu)
  {
  offset = t_info_first_offset + (cpu * pages);
  t_info->mfn_offset[cpu] = offset;
>>>
>>> Doesn't this go a little too far? Why would you allocate buffers for CPUs
>>> which can never be brought online? There ought to be a middle ground,
>>> where online-able CPUs have buffers allocated, but non-online-able ones
>>> won't. On larger systems I guess the difference may be quite noticable.
>> 
>> According to the comments in include/xen/cpumask.h cpu_present_map
>> represents the populated cpus.
>> 
>> I know that currently there is no support for onlining a parked cpu
>> again, but I think having to think about Xentrace buffer allocation in
>> case onlining of parked cpus is added would be a nearly 100% chance to
>> introduce a bug.
>> 
>> Xentrace is used for testing purposes only. So IMHO allocating some more
>> memory is acceptable.
> 
> Are you fine with my reasoning or do you still want me to avoid buffer
> allocation for offline cpus?

I don't object to it, but I'm also not overly happy. IOW - I'd like to leave
it to George as the maintainer of the code (who in turn might leave it to
you).

Jan
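
(For context, the two iterators at issue, with their semantics as described
in the thread, paraphrasing the comments in xen/include/xen/cpumask.h:

    for_each_online_cpu(cpu)   /* only cpus currently brought up */
    for_each_present_cpu(cpu)  /* all populated cpus, including parked ones */
)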



___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] xen/ARM+sched: Don't opencode %pv in printk()'s

2018-09-10 Thread Julien Grall

Hi,

On 30/08/18 13:50, Andrew Cooper wrote:

No functional change.

Signed-off-by: Andrew Cooper 


I have committed the patch.

Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] xentrace: handle sparse cpu ids correctly in xen trace buffer handling

2018-09-10 Thread Juergen Gross
On 30/08/18 11:28, Juergen Gross wrote:
> On 30/08/18 10:26, Jan Beulich wrote:
> On 30.08.18 at 09:52,  wrote:
>>> @@ -202,7 +202,7 @@ static int alloc_trace_bufs(unsigned int pages)
>>>   * Allocate buffers for all of the cpus.
>>>   * If any fails, deallocate what you have so far and exit.
>>>   */
>>> -for_each_online_cpu(cpu)
>>> +for_each_present_cpu(cpu)
>>>  {
>>>  offset = t_info_first_offset + (cpu * pages);
>>>  t_info->mfn_offset[cpu] = offset;
>>
>> Doesn't this go a little too far? Why would you allocate buffers for CPUs
>> which can never be brought online? There ought to be a middle ground,
>> where online-able CPUs have buffers allocated, but non-online-able ones
>> won't. On larger systems I guess the difference may be quite noticable.
> 
> According to the comments in include/xen/cpumask.h cpu_present_map
> represents the populated cpus.
> 
> I know that currently there is no support for onlining a parked cpu
> again, but I think having to think about Xentrace buffer allocation in
> case onlining of parked cpus is added would be a nearly 100% chance to
> introduce a bug.
> 
> Xentrace is used for testing purposes only. So IMHO allocating some more
> memory is acceptable.

Are you fine with my reasoning or do you still want me to avoid buffer
allocation for offline cpus?


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v8 6/6] x86/iommu: add map-reserved dom0-iommu option to map reserved memory ranges

2018-09-10 Thread Julien Grall

Hi Roger,

On 07/09/18 10:07, Roger Pau Monne wrote:

Several people have reported hardware issues (malfunctioning USB
controllers) due to iommu page faults on Intel hardware. Those faults
are caused by missing RMRR (VTd) entries in the ACPI tables. Those can
be worked around on VTd hardware by manually adding RMRR entries on
the command line, this is however limited to Intel hardware and quite
cumbersome to do.

In order to solve those issues add a new dom0-iommu=map-reserved
option that identity maps all regions marked as reserved in the memory
map. Note that regions used by devices emulated by Xen (LAPIC, IO-APIC
or PCIe MCFG regions) are specifically avoided. Note that this option
is available to all Dom0 modes (as opposed to the inclusive option
which only works for PV Dom0).

Signed-off-by: Roger Pau Monné 
Reviewed-by: Kevin Tian 
Reviewed-by: Wei Liu 
Acked-by: Jan Beulich 


For Arm bits:

Acked-by: Julien Grall 

Cheers,

--
Julien Grall
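
(For reference, enabling the new behaviour per the description above is a
single Xen command line addition:

    dom0-iommu=map-reserved
)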

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v3] xen/balloon: add runtime control for scrubbing ballooned out pages

2018-09-10 Thread Juergen Gross
On 07/09/18 18:49, Marek Marczykowski-Górecki wrote:
> Scrubbing pages on initial balloon down can take some time, especially
> in nested virtualization case (nested EPT is slow). When HVM/PVH guest is
> started with memory= significantly lower than maxmem=, all the extra
> pages will be scrubbed before returning to Xen. But since most of them
> weren't used at all at that point, Xen needs to populate them first
> (from populate-on-demand pool). In nested virt case (Xen inside KVM)
> this slows down the guest boot by 15-30s with just 1.5GB needed to be
> returned to Xen.
> 
> Add runtime parameter to enable/disable it, to allow initially disabling
> scrubbing, then enable it back during boot (for example in initramfs).
> Such usage relies on assumption that a) most pages ballooned out during
> initial boot weren't used at all, and b) even if they were, very few
> secrets are in the guest at that time (before any serious userspace
> kicks in).
> Convert CONFIG_XEN_SCRUB_PAGES to CONFIG_XEN_SCRUB_PAGES_DEFAULT (also
> enabled by default), controlling default value for the new runtime
> switch.
> 
> Signed-off-by: Marek Marczykowski-Górecki 

Reviewed-by: Juergen Gross 


Juergen
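
(A minimal sketch of the shape of such a runtime switch; the names and the
parameter wiring here are assumptions, see the actual patch for the real
interface:

    #include <linux/moduleparam.h>  /* module_param() */
    #include <linux/highmem.h>      /* clear_highpage() */

    /* Default comes from Kconfig; can be flipped at runtime via
     * /sys/module/.../parameters/ instead of being compile-time only. */
    static bool xen_scrub_pages = IS_ENABLED(CONFIG_XEN_SCRUB_PAGES_DEFAULT);
    module_param(xen_scrub_pages, bool, 0644);

    static void xenmem_reservation_scrub_page(struct page *page)
    {
        if (xen_scrub_pages)
            clear_highpage(page);
    }
)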

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
