[Xen-devel] [qemu-mainline test] 146840: regressions - FAIL

2020-02-10 Thread osstest service owner
flight 146840 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146840/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 144861
 build-arm64-xsm               6 xen-build                fail REGR. vs. 144861
 build-armhf                   6 xen-build                fail REGR. vs. 144861
 build-amd64-xsm               6 xen-build                fail REGR. vs. 144861
 build-i386-xsm                6 xen-build                fail REGR. vs. 144861
 build-amd64                   6 xen-build                fail REGR. vs. 144861
 build-i386                    6 xen-build                fail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  

[Xen-devel] [linux-5.4 test] 146833: regressions - FAIL

2020-02-10 Thread osstest service owner
flight 146833 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146833/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 146121
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 146121
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail REGR. vs. 146121

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat  fail pass in 146760

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10   fail REGR. vs. 146121

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim     12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm   13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt       13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail  never pass
 test-arm64-arm64-xl-credit2   13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2   14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1   13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1   14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale   13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale   14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm       13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm       14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd  12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt      13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2   13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2   14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail   never pass
 test-arm64-arm64-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle   13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl           13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle   14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl           14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check       fail   never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check   fail   never pass
 test-armhf-armhf-xl-credit1   13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds      13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1   14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl           14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw  12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw  13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail   never pass
 test-armhf-armhf-xl-vhd       12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd       13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail   never pass

version targeted for testing:
 linux                58c72057f662cee4ec2aaab9be1abeced884814a
baseline version:
 

[Xen-devel] [ovmf test] 146834: regressions - FAIL

2020-02-10 Thread osstest service owner
flight 146834 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146834/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 145767
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 145767

version targeted for testing:
 ovmf 4b026f0d5af36faf3a3629a3ad49c51b5b3be12f
baseline version:
 ovmf 70911f1f4aee0366b6122f2b90d367ec0f066beb

Last test of basis   145767  2020-01-08 00:39:09 Z   34 days
Failing since        145774  2020-01-08 02:50:20 Z   34 days  118 attempts
Testing same since   146834  2020-02-10 18:22:42 Z    0 days    1 attempts


People who touched revisions under test:
  Aaron Li 
  Albecki, Mateusz 
  Amol N Sukerkar 
  Anthony PERARD 
  Ard Biesheuvel 
  Ashish Singhal 
  Bob Feng 
  Bret Barkelew 
  Brian R Haug 
  Eric Dong 
  Fan, ZhijuX 
  Guo Dong 
  Hao A Wu 
  Heng Luo 
  Jason Voelz 
  Jeff Brasen 
  Jian J Wang 
  Kinney, Michael D 
  Krzysztof Koch 
  Laszlo Ersek 
  Leif Lindholm 
  Li, Aaron 
  Liming Gao 
  Liu, Zhiguang 
  Mateusz Albecki 
  Michael D Kinney 
  Michael Kubacki 
  Pavana.K 
  Philippe Mathieu-Daud? 
  Philippe Mathieu-Daude 
  Philippe Mathieu-Daudé 
  Pierre Gondois 
  Sean Brogan 
  Siyuan Fu 
  Siyuan, Fu 
  Steven 
  Steven Shi 
  Sudipto Paul 
  Vitaly Cheptsov 
  Vitaly Cheptsov via Groups.Io 
  Wei6 Xu 
  Xu, Wei6 
  Zhichao Gao 
  Zhiguang Liu 
  Zhiju.Fan 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3162 lines long.)


Re: [Xen-devel] PVH PCI passthrough for DomUs

2020-02-10 Thread Roman Shaposhnik
Thanks for all the background information -- this is very much appreciated!

Looking at the level of effort on this one, we ultimately decided to
stick with HVM for our usecase in Project EVE for now.

If there's customer pressure -- we'll definitely look into picking it back up.

Thanks,
Roman.

On Mon, Jan 27, 2020 at 6:52 AM Roger Pau Monné  wrote:
>
> Forgot to set 'To:' correctly.
>
> On Mon, Jan 27, 2020 at 03:28:36PM +0100, Roger Pau Monné wrote:
> > On Mon, Jan 27, 2020 at 12:27:18PM +, Wei Liu wrote:
> > > Cc Roger
> >
> > Thanks :).
> >
> > > On Sun, Jan 19, 2020 at 11:30:42PM -0800, Roman Shaposhnik wrote:
> > > > Hi!
> > > >
> > > > I've just tried this with Xen 4.13.0 and it seems like that is still
> > > > not supported.
> >
> > No, there hasn't been much progress on this sadly.
> >
> > > > This makes me curious if anybody is working on this and whether
> > > > there's anything we can do to help accelerate the effort.
> >
> > The first step would be to get vPCI hooked into the ioreq machinery,
> > so that a domain can have devices on the emulated PCI bus handled by
> > vPCI while others are handled by external ioreq emulators. I've posted
> > a v3 of this work on September:
> >
> > https://lists.xenproject.org/archives/html/xen-devel/2019-09/msg03278.html
> >
> > But I haven't got time to go over the comments and post a new version.
> >
> > Once that's done the remaining step would be to make vPCI safe for
> > unprivileged guests. We need to assure that guests can only write to
> > specific bits of the config space, and need to limit the capabilities
> > that are exposed to the ones Xen knows to be safe to handle. This can
> > be worked by multiple people concurrently IMO, but requires step 1
> > (integration with ioreq) to be finished first.
> >
> > I'm more than happy for someone to pick any of those tasks, including
> > the integration of vPCI with the ioreq machinery. If not, I expect I
> > will be able to do some work on this in a couple of weeks, but that
> > depends on nothing else getting on fire, and me being able to flush my
> > queue of pending patches.
> >
> > Would you be up to pick some of these tasks?
> >
> > I can try to speedup the vPCI ioreq integration if there's people
> > willing to work on the remaining steps, I haven't done so because I
> > didn't see much interest in general, and I was expecting to be the
> > only one working on the remaining steps anyway.
> >
> > Regards, Roger.
> >


[Xen-devel] [qemu-mainline test] 146836: regressions - FAIL

2020-02-10 Thread osstest service owner
flight 146836 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146836/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 144861
 build-arm64-xsm               6 xen-build                fail REGR. vs. 144861
 build-armhf                   6 xen-build                fail REGR. vs. 144861
 build-amd64-xsm               6 xen-build                fail REGR. vs. 144861
 build-i386-xsm                6 xen-build                fail REGR. vs. 144861
 build-amd64                   6 xen-build                fail REGR. vs. 144861
 build-i386                    6 xen-build                fail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 

[Xen-devel] [xen-unstable test] 146829: trouble: blocked/broken

2020-02-10 Thread osstest service owner
flight 146829 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146829/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf  broken
 build-i386-xsm   broken
 build-amd64  broken
 build-amd64-pvopsbroken
 build-arm64-pvopsbroken
 build-amd64-prev broken
 build-armhf-pvopsbroken
 build-i386   broken
 build-i386-prev  broken
 build-arm64  broken
 build-amd64-xtf  broken
 build-arm64-xsm  broken
 build-amd64-xsm  broken
 build-i386-pvops broken
 build-i386            4 host-install(4)    broken REGR. vs. 146796
 build-amd64           4 host-install(4)    broken REGR. vs. 146796
 build-i386-pvops      4 host-install(4)    broken REGR. vs. 146796
 build-armhf-pvops     4 host-install(4)    broken REGR. vs. 146796
 build-amd64-pvops     4 host-install(4)    broken REGR. vs. 146796
 build-amd64-xsm       4 host-install(4)    broken REGR. vs. 146796
 build-arm64-xsm       4 host-install(4)    broken REGR. vs. 146796
 build-i386-xsm        4 host-install(4)    broken REGR. vs. 146796
 build-arm64-pvops     4 host-install(4)    broken REGR. vs. 146796
 build-amd64-xtf       4 host-install(4)    broken REGR. vs. 146796
 build-amd64-prev      4 host-install(4)    broken REGR. vs. 146796
 build-i386-prev       4 host-install(4)    broken REGR. vs. 146796
 build-arm64           4 host-install(4)    broken REGR. vs. 146796
 build-armhf           4 host-install(4)    broken REGR. vs. 146796

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-xtf-amd64-amd64-21 

[Xen-devel] [xen-unstable-smoke test] 146838: tolerable all pass - PUSHED

2020-02-10 Thread osstest service owner
flight 146838 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146838/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  3dd724dff085e13ad520f8e35aea717db2ff07d0
baseline version:
 xen  640330d41e83af8f1b6fbe09a91712e50c411616

Last test of basis   146835  2020-02-10 19:00:53 Z    0 days
Testing same since   146838  2020-02-10 22:00:35 Z    0 days    1 attempts


People who touched revisions under test:
  Ian Jackson 
  Julien Grall 
  Paul Durrant 
  Wei Liu 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   640330d41e..3dd724dff0  3dd724dff085e13ad520f8e35aea717db2ff07d0 -> smoke


[Xen-devel] [xen-unstable-smoke test] 146835: tolerable all pass - PUSHED

2020-02-10 Thread osstest service owner
flight 146835 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146835/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  640330d41e83af8f1b6fbe09a91712e50c411616
baseline version:
 xen  270ff9a835fb4bcfead85a84d0f384b77bde93c0

Last test of basis   146832  2020-02-10 16:00:51 Z    0 days
Testing same since   146835  2020-02-10 19:00:53 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  George Dunlap 
  Jan Beulich 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   270ff9a835..640330d41e  640330d41e83af8f1b6fbe09a91712e50c411616 -> smoke


[Xen-devel] [libvirt test] 146827: trouble: blocked/broken

2020-02-10 Thread osstest service owner
flight 146827 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146827/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-pvopsbroken
 build-armhf-pvopsbroken
 build-arm64  broken
 build-i386-xsm   broken
 build-i386-pvops broken
 build-armhf  broken
 build-amd64  broken
 build-arm64-xsm  broken
 build-amd64-xsm  broken
 build-amd64-pvopsbroken
 build-i386   broken
 build-i386-xsm        4 host-install(4)    broken REGR. vs. 146182
 build-i386            4 host-install(4)    broken REGR. vs. 146182
 build-arm64-pvops     4 host-install(4)    broken REGR. vs. 146182
 build-armhf-pvops     4 host-install(4)    broken REGR. vs. 146182
 build-amd64           4 host-install(4)    broken REGR. vs. 146182
 build-amd64-xsm       4 host-install(4)    broken REGR. vs. 146182
 build-i386-pvops      4 host-install(4)    broken REGR. vs. 146182
 build-arm64-xsm       4 host-install(4)    broken REGR. vs. 146182
 build-amd64-pvops     4 host-install(4)    broken REGR. vs. 146182
 build-arm64           4 host-install(4)    broken REGR. vs. 146182
 build-armhf           4 host-install(4)    broken REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a

version targeted for testing:
 libvirt  cebb468ef5e82b8d4253e27ef70c67812cf93c5a
baseline version:
 libvirt  a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z   24 days
Failing since        146211  2020-01-18 04:18:52 Z   23 days   24 attempts
Testing same since   146817  2020-02-09 04:18:40 Z    1 days    2 attempts


People who touched revisions under test:
  Andrea Bolognani 
  Boris Fiuczynski 
  Christian Ehrhardt 
  Daniel Henrique Barboza 
  Daniel P. Berrangé 
  Dario Faggioli 
  Erik Skultety 
  Han Han 
  Jim Fehlig 
  Jiri Denemark 
  Jonathon Jongsma 
  Julio Faracco 
  Ján Tomko 
  Laine Stump 
  Marek Marczykowski-Górecki 
  Michal Privoznik 
  Nikolay Shirokovskiy 
  Pavel Hrdina 
  Peter Krempa 
  Richard W.M. Jones 
  Sahid Orentino Ferdjaoui 
  Stefan Berger 
  Stefan Berger 
  Thomas Huth 
  zhenwei pi 

jobs:
 build-amd64-xsm  broken  
 build-arm64-xsm  broken  
 build-i386-xsm   broken  
 build-amd64  broken  
 build-arm64  broken  
 build-armhf  broken  
 build-i386   broken  
 build-amd64-libvirt  blocked 
 build-arm64-libvirt  blocked 
 build-armhf-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopsbroken  
 build-arm64-pvops 

Re: [Xen-devel] Xen-unstable: pci-passthrough regression bisected to: x86/smp: use APIC ALLBUT destination shorthand when possible

2020-02-10 Thread Sander Eikelenboom
On 03/02/2020 14:21, Roger Pau Monné wrote:
> On Mon, Feb 03, 2020 at 01:44:06PM +0100, Sander Eikelenboom wrote:
>> On 03/02/2020 13:41, Roger Pau Monné wrote:
>>> On Mon, Feb 03, 2020 at 01:30:55PM +0100, Sander Eikelenboom wrote:
 On 03/02/2020 13:23, Roger Pau Monné wrote:
> On Mon, Feb 03, 2020 at 09:33:51AM +0100, Sander Eikelenboom wrote:
>> Hi Roger,
>>
>> Last week I encountered an issue with the PCI-passthrough of a USB 
>> controller. 
>> In the guest I get:
>> [ 1143.313756] xhci_hcd :00:05.0: xHCI host not responding to 
>> stop endpoint command.
>> [ 1143.334825] xhci_hcd :00:05.0: xHCI host controller not 
>> responding, assume dead
>> [ 1143.347364] xhci_hcd :00:05.0: HC died; cleaning up
>> [ 1143.356407] usb 1-2: USB disconnect, device number 2
>>
>> Bisection turned up as the culprit: 
>>commit 5500d265a2a8fa63d60c08beb549de8ec82ff7a5
>>x86/smp: use APIC ALLBUT destination shorthand when possible
>
> Sorry to hear that, let see if we can figure out what's wrong.

 No problem, that is why I test stuff :)

>> I verified by reverting that commit and now it works fine again.
>
> Does the same controller work fine when used in dom0?

 Will test that, but as all other pci devices in dom0 work fine,
 I assume this controller would also work fine in dom0 (as it has also
 worked fine for ages with PCI-passthrough to that guest and still works
 fine when reverting the referenced commit).
>>>
>>> Is this the only device that fails to work when doing pci-passthrough,
>>> or other devices also don't work with the mentioned change applied?
>>>
>>> Have you tested on other boxes?
>>>
 I don't know if your change can somehow have a side effect
 on latency around the processing of pci-passthrough ?
>>>
>>> Hm, the mentioned commit should speed up broadcast IPIs, but I don't
>>> see how it could slow down other interrupts. Also I would think the
>>> domain is not receiving interrupts from the device, rather than
>>> interrupts being slow.
>>>
>>> Can you also paste the output of lspci -v for that xHCI device from
>>> dom0?
>>>
>>> Thanks, Roger.
>>
>> Will do this evening including the testing in dom0 etc.
>> Will also see if there is any pattern when observing /proc/interrupts in
>> the guest.
> 
> Thanks! I also have some trivial patch that I would like you to try,
> just to discard send_IPI_mask clearing the scratch_cpumask under
> another function feet.
> 
> Roger.

Hi Roger,

Took a while, but I was able to run some tests now.

I also forgot a detail in the first report (probably still a bit tired from FOSDEM), namely: the passed-through device works OK for a while before I get the kernel message.

I tested the patch and it looks like it makes the issue go away,
I tested for a day, while without the patch (or revert of the commit) the device
will give problems within a few hours.

lspci output from dom0 for this device is below.

--
Sander




lspci -vvvknn -s 08:00.0
08:00.0 USB controller [0c03]: NEC Corporation uPD720200 USB 3.0 Host 
Controller [1033:0194] (rev 03) (prog-if 30 [XHCI])
Subsystem: ASUSTeK Computer Inc. P8P67 Deluxe Motherboard [1043:8413]
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-  ---
> diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
> index 65eb7cbda8..aeeb506155 100644
> --- a/xen/arch/x86/smp.c
> +++ b/xen/arch/x86/smp.c
> @@ -66,7 +66,8 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
>  void send_IPI_mask(const cpumask_t *mask, int vector)
>  {
>      bool cpus_locked = false;
> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
> +    static DEFINE_PER_CPU(cpumask_t, send_ipi_cpumask);
> +    cpumask_t *scratch = &this_cpu(send_ipi_cpumask);
>  
>      /*
>       * This can only be safely used when no CPU hotplug or unplug operations
> 



Re: [Xen-devel] [PATCH v3 3/4] xen: teach KASAN about grant tables

2020-02-10 Thread Boris Ostrovsky


On 2/7/20 9:26 AM, Sergey Dyasli wrote:
> From: Ross Lagerwall 
>
> Otherwise it produces lots of false positives when a guest starts using
> PV I/O devices.
>
> Signed-off-by: Ross Lagerwall 
> Signed-off-by: Sergey Dyasli 
>


Reviewed-by: Boris Ostrovsky 


Re: [Xen-devel] [PATCH v3 2/4] x86/xen: add basic KASAN support for PV kernel

2020-02-10 Thread Boris Ostrovsky


On 2/7/20 9:26 AM, Sergey Dyasli wrote:
> Introduce and use xen_kasan_* functions that are needed to properly
> initialise KASAN for Xen PV domains. Disable instrumentation for files
> that are used by xen_start_kernel() before kasan_early_init() could
> be called.
>
> This enables to use Outline instrumentation for Xen PV kernels.
> KASAN_INLINE and KASAN_VMALLOC options currently lead to boot crashes
> and hence disabled.
>
> Signed-off-by: Sergey Dyasli 

Xen bits:

Reviewed-by: Boris Ostrovsky 




Re: [Xen-devel] [PATCH v4 7/7] x86/tlb: use Xen L0 assisted TLB flush when available

2020-02-10 Thread Wei Liu
On Mon, Feb 10, 2020 at 06:28:29PM +0100, Roger Pau Monne wrote:
> Use Xen's L0 HVMOP_flush_tlbs hypercall in order to perform flushes.
> This greatly increases the performance of TLB flushes when running
> with a high amount of vCPUs as a Xen guest, and is specially important
> when running in shim mode.
> 
> The following figures are from a PV guest running `make -j32 xen` in
> shim mode with 32 vCPUs and HAP.
> 
> Using x2APIC and ALLBUT shorthand:
> real  4m35.973s
> user  4m35.110s
> sys   36m24.117s
> 
> Using L0 assisted flush:
> real1m2.596s
> user4m34.818s
> sys 5m16.374s
> 
> The implementation adds a new hook to hypervisor_ops so other
> enlightenments can also implement such assisted flush just by filling
> the hook. Note that the Xen implementation completely ignores the
> dirty CPU mask and the linear address passed in, and always performs a
> global TLB flush on all vCPUs.
> 
> Signed-off-by: Roger Pau Monné 

Reviewed-by: Wei Liu 

One remark below.

[...]
>  const struct hypervisor_ops *__init xg_probe(void)
> diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
> index 65eb7cbda8..9bc925616a 100644
> --- a/xen/arch/x86/smp.c
> +++ b/xen/arch/x86/smp.c
> @@ -15,6 +15,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -256,6 +257,16 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
>      if ( (flags & ~FLUSH_ORDER_MASK) &&
>           !cpumask_subset(mask, cpumask_of(cpu)) )
>      {
> +        if ( cpu_has_hypervisor &&
> +             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
> +                         FLUSH_ORDER_MASK)) &&
> +             !hypervisor_flush_tlb(mask, va, flags & FLUSH_ORDER_MASK) )

I would like pass in the flag as a whole because Hyper-V has the
capability to fine tune what gets flushed. I can see FLUSH_TLB_GLOBAL
being used in that situation.

There is no need to change your patch though. I can submit a patch
myself to change this interface once your series is accepted.
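
As an aside for readers following along, the sketch below shows the hook-plus-fallback shape that the quoted hunk and commit message describe: try an assisted flush offered through an ops table first, and fall back to sending IPIs when it is unavailable. It is a self-contained toy program, not Xen code; the struct layout, the function names and the printf stand-ins are invented purely for illustration.

#include <stdio.h>

/* Made-up, simplified stand-in for a hypervisor_ops-style table: the hook
 * returns 0 on success and non-zero when the assisted flush is unavailable. */
struct hv_ops {
    int (*flush_tlb)(const unsigned long *mask, const void *va,
                     unsigned int order);
};

static int l0_assisted_flush(const unsigned long *mask, const void *va,
                             unsigned int order)
{
    (void)mask; (void)va; (void)order;
    /* A Xen-on-Xen implementation would issue HVMOP_flush_tlbs here; as the
     * commit message notes, it ignores mask/va and flushes all vCPUs. */
    printf("assisted global flush\n");
    return 0;
}

static const struct hv_ops guest_ops = { .flush_tlb = l0_assisted_flush };
static const struct hv_ops *ops = &guest_ops;

/* Mirrors the fast path in the quoted hunk: try the enlightenment first and
 * fall back to per-CPU flush IPIs if it is not available. */
static void flush_area_mask(const unsigned long *mask, const void *va,
                            unsigned int order)
{
    if ( ops && ops->flush_tlb && !ops->flush_tlb(mask, va, order) )
        return;

    printf("fallback: send flush IPIs\n");
}

int main(void)
{
    unsigned long mask = ~0UL;

    flush_area_mask(&mask, NULL, 0);
    return 0;
}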


Re: [Xen-devel] [PATCH v4 6/7] xen/guest: prepare hypervisor ops to use alternative calls

2020-02-10 Thread Wei Liu
On Mon, Feb 10, 2020 at 06:28:28PM +0100, Roger Pau Monne wrote:
> Adapt the hypervisor ops framework so it can be used with the
> alternative calls framework. So far no hooks are modified to make use
> of the alternatives patching, as they are not in any hot path.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné 

Reviewed-by: Wei Liu 


[Xen-devel] [qemu-mainline test] 146826: trouble: blocked/broken

2020-02-10 Thread osstest service owner
flight 146826 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146826/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf  broken
 build-amd64-xsm  broken
 build-amd64-pvopsbroken
 build-arm64-xsm  broken
 build-armhf-pvopsbroken
 build-amd64  broken
 build-i386-xsm   broken
 build-i386-pvops broken
 build-arm64  broken
 build-i386   broken
 build-arm64-pvopsbroken
 build-i386-pvops      4 host-install(4)    broken REGR. vs. 144861
 build-i386-xsm        4 host-install(4)    broken REGR. vs. 144861
 build-i386            4 host-install(4)    broken REGR. vs. 144861
 build-amd64-xsm       4 host-install(4)    broken REGR. vs. 144861
 build-amd64           4 host-install(4)    broken REGR. vs. 144861
 build-amd64-pvops     4 host-install(4)    broken REGR. vs. 144861
 build-arm64-pvops     4 host-install(4)    broken REGR. vs. 144861
 build-armhf-pvops     4 host-install(4)    broken REGR. vs. 144861
 build-arm64-xsm       4 host-install(4)    broken REGR. vs. 144861
 build-arm64           4 host-install(4)    broken REGR. vs. 144861
 build-armhf           4 host-install(4)    broken REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked 

Re: [Xen-devel] [PATCH] x86/pvh: Adjust dom0's starting state

2020-02-10 Thread Wei Liu
On Mon, Feb 10, 2020 at 06:39:21PM +, Andrew Cooper wrote:
> Fixes: b25fb1a04e "xen/pvh: Fix segment selector ABI"
> Signed-off-by: Andrew Cooper 

Reviewed-by: Wei Liu 


Re: [Xen-devel] [PATCH v4 5/7] x86/tlb: allow disabling the TLB clock

2020-02-10 Thread Wei Liu
On Mon, Feb 10, 2020 at 06:28:27PM +0100, Roger Pau Monne wrote:
> The TLB clock is helpful when running Xen on bare metal because when
> doing a TLB flush each CPU is IPI'ed and can keep a timestamp of the
> last flush.
> 
> This is not the case however when Xen is running virtualized, and the
> underlying hypervisor provides mechanism to assist in performing TLB
> flushes: Xen itself for example offers a HVMOP_flush_tlbs hypercall in
> order to perform a TLB flush without having to IPI each CPU. When
> using such mechanisms it's no longer possible to keep a timestamp of
> the flushes on each CPU, as they are performed by the underlying
> hypervisor.
> 
> Offer a boolean in order to signal Xen that the timestamped TLB
> shouldn't be used. This avoids keeping the timestamps of the flushes,
> and also forces NEED_FLUSH to always return true.
> 
> No functional change intended, as this change doesn't introduce any
> user that disables the timestamped TLB.
> 
> Signed-off-by: Roger Pau Monné 

Reviewed-by: Wei Liu 
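
For illustration only, here is a small standalone sketch of the behaviour described in the commit message above. The names and the need_flush() logic are simplified stand-ins (the real NEED_FLUSH macro and the flag live in Xen's flushtlb code and differ in detail); the point is just that once the clock is disabled every query must report that a flush is needed.

#include <stdbool.h>
#include <stdio.h>

/* Cleared when a hypervisor-assisted flush is in use and per-CPU flush
 * timestamps can no longer be maintained. */
static bool tlb_clk_enabled = true;

/* Simplified stand-in for a NEED_FLUSH(cpu_stamp, page_stamp) check: with
 * the clock disabled there is no timestamp information, so always flush. */
static bool need_flush(unsigned int cpu_stamp, unsigned int page_stamp)
{
    return !tlb_clk_enabled || cpu_stamp <= page_stamp;
}

int main(void)
{
    /* Clock enabled: a CPU whose last flush is newer than the page's stamp
     * may skip the flush. */
    printf("clock on : %s\n", need_flush(5, 3) ? "flush" : "skip");

    /* Clock disabled: the same query must always flush. */
    tlb_clk_enabled = false;
    printf("clock off: %s\n", need_flush(5, 3) ? "flush" : "skip");
    return 0;
}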


Re: [Xen-devel] [PATCH v3] xen/arm: Handle unimplemented VGICv3 dist registers as RAZ/WI

2020-02-10 Thread Jeff Kubascik
Hey Julien,

On 2/8/2020 7:05 AM, Julien Grall wrote:
> Hi Jeff,
> 
> As you now handle GICR register, I would drop "dist" from the title.
> 

Good catch, I missed this in the title.

> On 04/02/2020 19:51, Jeff Kubascik wrote:
>> Per the ARM Generic Interrupt Controller Architecture Specification (ARM
>> IHI 0069E), reserved registers should generally be treated as RAZ/WI.
>> To simplify the VGICv3 design and improve guest compatibility, treat the
>> default case for GICD and GICR registers as read_as_zero/write_ignore.
>>
>> Signed-off-by: Jeff Kubascik 
> 
> Acked-by: Julien Grall 
> 
> I will update the commit title while committing the patch.

Thank you!

>> ---
>> Changes in v3:
>> - Fixed spelling error in commit message
>> - Dropped misleading comments that were added in v2
>> - Added printk back in for default case
>> - Implemented RAZ/WI for the redist registers as well
>> - Update commit message to include GICR scope
>> ---
>>   xen/arch/arm/vgic-v3.c | 22 --
>>   1 file changed, 16 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
>> index 422b94f902..4e60ba15cc 100644
>> --- a/xen/arch/arm/vgic-v3.c
>> +++ b/xen/arch/arm/vgic-v3.c
>> @@ -320,7 +320,7 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, 
>> mmio_info_t *info,
>>   printk(XENLOG_G_ERR
>>  "%pv: vGICR: unhandled read r%d offset %#08x\n",
>>  v, dabt.reg, gicr_reg);
>> -return 0;
>> +goto read_as_zero;
>>   }
>>   bad_width:
>>   printk(XENLOG_G_ERR "%pv vGICR: bad read width %d r%d offset %#08x\n",
>> @@ -337,6 +337,10 @@ read_as_zero_32:
>>   *r = 0;
>>   return 1;
>>
>> +read_as_zero:
>> +*r = 0;
>> +return 1;
>> +
>>   read_impl_defined:
>>   printk(XENLOG_G_DEBUG
>>  "%pv: vGICR: RAZ on implementation defined register offset 
>> %#08x\n",
>> @@ -638,7 +642,7 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu 
>> *v, mmio_info_t *info,
>>   default:
>>   printk(XENLOG_G_ERR "%pv: vGICR: unhandled write r%d offset 
>> %#08x\n",
>>  v, dabt.reg, gicr_reg);
>> -return 0;
>> +goto write_ignore;
>>   }
>>   bad_width:
>>   printk(XENLOG_G_ERR
>> @@ -654,6 +658,9 @@ write_ignore_32:
>>   if ( dabt.size != DABT_WORD ) goto bad_width;
>>   return 1;
>>
>> +write_ignore:
>> +return 1;
>> +
>>   write_impl_defined:
>>   printk(XENLOG_G_DEBUG
>>  "%pv: vGICR: WI on implementation defined register offset 
>> %#08x\n",
>> @@ -925,7 +932,7 @@ static int vgic_v3_rdistr_sgi_mmio_read(struct vcpu *v, 
>> mmio_info_t *info,
>>   printk(XENLOG_G_ERR
>>  "%pv: vGICR: SGI: unhandled read r%d offset %#08x\n",
>>  v, dabt.reg, gicr_reg);
>> -return 0;
>> +goto read_as_zero;
>>   }
>>   bad_width:
>>   printk(XENLOG_G_ERR "%pv: vGICR: SGI: bad read width %d r%d offset 
>> %#08x\n",
>> @@ -1002,7 +1009,7 @@ static int vgic_v3_rdistr_sgi_mmio_write(struct vcpu 
>> *v, mmio_info_t *info,
>>   printk(XENLOG_G_ERR
>>  "%pv: vGICR: SGI: unhandled write r%d offset %#08x\n",
>>  v, dabt.reg, gicr_reg);
>> -return 0;
>> +goto write_ignore;
>>   }
>>
>>   bad_width:
>> @@ -1014,6 +1021,9 @@ bad_width:
>>   write_ignore_32:
>>   if ( dabt.size != DABT_WORD ) goto bad_width;
>>   return 1;
>> +
>> +write_ignore:
>> +return 1;
>>   }
>>
>>   static struct vcpu *get_vcpu_from_rdist(struct domain *d,
>> @@ -1252,7 +1262,7 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, 
>> mmio_info_t *info,
>>   default:
>>   printk(XENLOG_G_ERR "%pv: vGICD: unhandled read r%d offset 
>> %#08x\n",
>>  v, dabt.reg, gicd_reg);
>> -return 0;
>> +goto read_as_zero;
>>   }
>>
>>   bad_width:
>> @@ -1438,7 +1448,7 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, 
>> mmio_info_t *info,
>>   printk(XENLOG_G_ERR
>>  "%pv: vGICD: unhandled write r%d=%"PRIregister" offset 
>> %#08x\n",
>>  v, dabt.reg, r, gicd_reg);
>> -return 0;
>> +goto write_ignore;
>>   }
>>
>>   bad_width:
>>
> 
> Cheers,
> 
> --
> Julien Grall
> 

Sincerely,
Jeff Kubascik


[Xen-devel] [PATCH v8 1/5] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries

2020-02-10 Thread Tamas K Lengyel
The owner domain of shared pages is dom_cow, use that for get_page
otherwise the function fails to return the correct page under some
situations. The check if dom_cow should be used was only performed in
a subset of use-cases. Fixing the error and simplifying the existing check
since we can't have any shared entries with dom_cow being NULL.

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/p2m.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index fd9f09536d..2c0bb7e869 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -574,11 +574,12 @@ struct page_info *p2m_get_page_from_gfn(
                 if ( fdom == NULL )
                     page = NULL;
             }
-            else if ( !get_page(page, p2m->domain) &&
-                      /* Page could be shared */
-                      (!dom_cow || !p2m_is_shared(*t) ||
-                       !get_page(page, dom_cow)) )
-                page = NULL;
+            else
+            {
+                struct domain *d = !p2m_is_shared(*t) ? p2m->domain : dom_cow;
+                if ( !get_page(page, d) )
+                    page = NULL;
+            }
         }
         p2m_read_unlock(p2m);
 
@@ -594,8 +595,9 @@ struct page_info *p2m_get_page_from_gfn(
     mfn = get_gfn_type_access(p2m, gfn_x(gfn), t, a, q, NULL);
     if ( p2m_is_ram(*t) && mfn_valid(mfn) )
     {
+        struct domain *d = !p2m_is_shared(*t) ? p2m->domain : dom_cow;
         page = mfn_to_page(mfn);
-        if ( !get_page(page, p2m->domain) )
+        if ( !get_page(page, d) )
             page = NULL;
     }
     put_gfn(p2m->domain, gfn_x(gfn));
-- 
2.20.1



[Xen-devel] [PATCH v8 0/5] VM forking

2020-02-10 Thread Tamas K Lengyel
The following series implements VM forking for Intel HVM guests to allow for
the fast creation of identical VMs without the associated high startup costs
of booting or restoring the VM from a savefile.

JIRA issue: https://xenproject.atlassian.net/browse/XEN-89

The fork operation is implemented as part of the "xl fork-vm" command:
xl fork-vm -C  -Q  

By default a fully functional fork is created. It is however up to the user to
create the appropriate config file for the fork and to generate the QEMU save
file before the fork-vm call is made. The config file needs to give the
fork a new name at minimum but other settings may also require changes.

The interface also allows to split the forking into two steps:
xl fork-vm --launch-dm no \
   -p 
xl fork-vm --launch-dm late \
   -C  \
   -Q  \
   

The split creation model is useful when the VM needs to be created as fast as
possible. The forked VM can be unpaused without the device model being launched
and can be monitored and accessed via VMI. Note however that without its device
model running (depending on what is executing in the VM) it is bound to
misbehave or even crash when it's trying to access devices that would be
emulated by QEMU. We anticipate that for certain use-cases this would be an
acceptable situation, for example when fuzzing is performed on code
segments that don't access such devices.

Launching the device model requires the QEMU Xen savefile to be generated
manually from the parent VM. This can be accomplished simply by connecting to
its QMP socket and issuing the "xen-save-devices-state" command. For example
using the standard tool socat these commands can be used to generate the file:
socat - UNIX-CONNECT:/var/run/xen/qmp-libxl-
{ "execute": "qmp_capabilities" }
{ "execute": "xen-save-devices-state", \
"arguments": { "filename": "/path/to/save/qemu_state", \
"live": false} }

At runtime the forked VM starts running with an empty p2m which gets lazily
populated when the VM generates EPT faults, similar to how altp2m views are
populated. If the memory access is a read-only access, the p2m entry is
populated with a memory shared entry with its parent. For write memory accesses
or in case memory sharing wasn't possible (for example in case a reference is
held by a third party), a new page is allocated and the page contents are
copied over from the parent VM. Forks can be further forked if needed, thus
allowing for further memory savings.
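
To make that population policy concrete, here is a minimal, self-contained C sketch. Every helper name in it is hypothetical and nothing is taken from the actual mem_sharing/p2m code; it only models the read-share versus write-copy decision described above.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

struct page { unsigned char data[PAGE_SIZE]; };

/* Stand-in for memory sharing: in the real implementation this can fail,
 * e.g. when a third party holds an extra reference on the parent's page. */
static bool share_with_parent(struct page **slot, struct page *parent_pg)
{
    *slot = parent_pg;
    return true;
}

/* Stand-in for allocating a fresh page and copying the parent's contents. */
static struct page *copy_from_parent(const struct page *parent_pg)
{
    static struct page local;
    memcpy(local.data, parent_pg->data, PAGE_SIZE);
    return &local;
}

/* Populate one of the fork's p2m entries on demand, following the policy
 * described in the cover letter: share on read, copy on write or when a
 * sharing attempt fails. */
static struct page *fork_populate(struct page *parent_pg, bool write_access)
{
    struct page *entry = NULL;

    if ( !write_access && share_with_parent(&entry, parent_pg) )
        return entry;                       /* read fault: share with parent */

    return copy_from_parent(parent_pg);     /* write fault or sharing failed */
}

int main(void)
{
    struct page parent = { .data = { 0x42 } };

    printf("read  fault -> %s\n",
           fork_populate(&parent, false) == &parent ? "shared" : "copied");
    printf("write fault -> %s\n",
           fork_populate(&parent, true) == &parent ? "shared" : "copied");
    return 0;
}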

A VM fork reset hypercall is also added that allows the fork to be reset to the
state it was just after a fork, also accessible via xl:
xl fork-vm --fork-reset -p 

This is an optimization for cases where the forks are very short-lived and run
without a device model, so resetting saves some time compared to creating a
brand new fork provided the fork has not acquired a lot of memory. If the fork
has a lot of memory deduplicated it is likely going to be faster to create a
new fork from scratch and asynchronously destroying the old one.

The series has been tested with both Linux and Windows VMs and functions as
expected. VM forking time has been measured to be 0.0007s, device model launch
to be around 1s depending largely on the number of devices being emulated. Fork
resets have been measured to be 0.0001s under the optimal circumstances.

New in v8: rebase on staging and a minor fix in the toolstack code

Patch 1 is a bugfix for pre-existing code in p2m

Patch 2 exposes a hap internal function that will be used during VM forking

Patch 3-4 implements the VM fork & reset operation hypervisor side bits

Patch 5 adds the toolstack-side code implementing VM forking and reset

Tamas K Lengyel (5):
  x86/p2m: Allow p2m_get_page_from_gfn to return shared entries
  xen/x86: Make hap_get_allocation accessible
  xen/mem_sharing: VM forking
  x86/mem_sharing: reset a fork
  xen/tools: VM forking toolstack side

 docs/man/xl.1.pod.in  |  36 
 tools/libxc/include/xenctrl.h |  13 ++
 tools/libxc/xc_memshr.c   |  22 +++
 tools/libxl/libxl.h   |   7 +
 tools/libxl/libxl_create.c| 256 -
 tools/libxl/libxl_dm.c|   2 +-
 tools/libxl/libxl_dom.c   |  43 -
 tools/libxl/libxl_internal.h  |   1 +
 tools/libxl/libxl_types.idl   |   1 +
 tools/xl/xl.h |   5 +
 tools/xl/xl_cmdtable.c|  12 ++
 tools/xl/xl_saverestore.c |  97 ++
 tools/xl/xl_vmcontrol.c   |   8 +
 xen/arch/x86/domain.c |  11 ++
 xen/arch/x86/hvm/hvm.c|   2 +-
 xen/arch/x86/mm/hap/hap.c |   3 +-
 xen/arch/x86/mm/mem_sharing.c | 297 ++
 xen/arch/x86/mm/p2m.c |  25 ++-
 xen/include/asm-x86/hap.h |   1 +
 xen/include/asm-x86/mem_sharing.h |  17 ++
 xen/include/public/memory.h   |   6 +
 xen/include/xen/sched.h   

[Xen-devel] [PATCH v8 2/5] xen/x86: Make hap_get_allocation accessible

2020-02-10 Thread Tamas K Lengyel
During VM forking we'll copy the parent domain's parameters to the client,
including the HAP shadow memory setting that is used for storing the domain's
EPT. We'll copy this in the hypervisor instead of doing it during toolstack launch
to allow the domain to start executing and unsharing memory before (or
even completely without) the toolstack.

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/hap/hap.c | 3 +--
 xen/include/asm-x86/hap.h | 1 +
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d93f3451c..c7c7ff6e99 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -321,8 +321,7 @@ static void hap_free_p2m_page(struct domain *d, struct 
page_info *pg)
 }
 
 /* Return the size of the pool, rounded up to the nearest MB */
-static unsigned int
-hap_get_allocation(struct domain *d)
+unsigned int hap_get_allocation(struct domain *d)
 {
 unsigned int pg = d->arch.paging.hap.total_pages
 + d->arch.paging.hap.p2m_pages;
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index b94bfb4ed0..1bf07e49fe 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -45,6 +45,7 @@ int   hap_track_dirty_vram(struct domain *d,
 
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
+unsigned int hap_get_allocation(struct domain *d);
 
 #endif /* XEN_HAP_H */
 
-- 
2.20.1



[Xen-devel] [PATCH v8 5/5] xen/tools: VM forking toolstack side

2020-02-10 Thread Tamas K Lengyel
Add necessary bits to implement "xl fork-vm" commands. The command allows the
user to specify how to launch the device model allowing for a late-launch model
in which the user can execute the fork without the device model and decide to
only later launch it.

Signed-off-by: Tamas K Lengyel 
---
v8: don't try to unpause twice when launching dm
---
 docs/man/xl.1.pod.in  |  36 +
 tools/libxc/include/xenctrl.h |  13 ++
 tools/libxc/xc_memshr.c   |  22 +++
 tools/libxl/libxl.h   |   7 +
 tools/libxl/libxl_create.c| 256 ++
 tools/libxl/libxl_dm.c|   2 +-
 tools/libxl/libxl_dom.c   |  43 +-
 tools/libxl/libxl_internal.h  |   1 +
 tools/libxl/libxl_types.idl   |   1 +
 tools/xl/xl.h |   5 +
 tools/xl/xl_cmdtable.c|  12 ++
 tools/xl/xl_saverestore.c |  97 +
 tools/xl/xl_vmcontrol.c   |   8 ++
 13 files changed, 409 insertions(+), 94 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 33ad2ebd71..c4012939f5 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -694,6 +694,42 @@ Leave the domain paused after creating the snapshot.
 
 =back
 
+=item B<fork-vm> [I<OPTIONS>] I<domain-id>
+
+Create a fork of a running VM. The domain will be paused after the operation
+and needs to remain paused while forks of it exist.
+
+B<OPTIONS>
+
+=over 4
+
+=item B<-p>
+
+Leave the fork paused after creating it.
+
+=item B<--launch-dm>
+
+Specify whether the device model (QEMU) should be launched for the fork. Late
+launch allows starting the device model for an already running fork.
+
+=item B<-C>
+
+The config file to use when launching the device model. Currently required when
+launching the device model.
+
+=item B<-Q>
+
+The qemu save file to use when launching the device model.  Currently required
+when launching the device model.
+
+=item B<--fork-reset>
+
+Perform a reset operation of an already running fork. Note that resetting may
+be less performant than creating a new fork, depending on how much memory the
+fork has deduplicated during its runtime.
+
+=back
+
 =item B<sharing> [I<domain-id>]
 
 Display the number of shared pages for a specified domain. If no domain is
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index cc4eb1e3d3..6f65888dd0 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2225,6 +2225,19 @@ int xc_memshr_range_share(xc_interface *xch,
   uint64_t first_gfn,
   uint64_t last_gfn);
 
+int xc_memshr_fork(xc_interface *xch,
+   uint32_t source_domain,
+   uint32_t client_domain);
+
+/*
+ * Note: this function is only intended to be used on short-lived forks that
+ * haven't yet acquired a lot of memory. In case the fork has a lot of memory
+ * it is likely more performant to create a new fork with xc_memshr_fork.
+ *
+ * With VMs that have a lot of memory this call may block for a long time.
+ */
+int xc_memshr_fork_reset(xc_interface *xch, uint32_t forked_domain);
+
 /* Debug calls: return the number of pages referencing the shared frame backing
  * the input argument. Should be one or greater.
  *
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 97e2e6a8d9..d0e4ee225b 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -239,6 +239,28 @@ int xc_memshr_debug_gref(xc_interface *xch,
 return xc_memshr_memop(xch, domid, &mso);
 }
 
+int xc_memshr_fork(xc_interface *xch, uint32_t pdomid, uint32_t domid)
+{
+xen_mem_sharing_op_t mso;
+
+memset(&mso, 0, sizeof(mso));
+
+mso.op = XENMEM_sharing_op_fork;
+mso.u.fork.parent_domain = pdomid;
+
+return xc_memshr_memop(xch, domid, &mso);
+}
+
+int xc_memshr_fork_reset(xc_interface *xch, uint32_t domid)
+{
+xen_mem_sharing_op_t mso;
+
+memset(&mso, 0, sizeof(mso));
+mso.op = XENMEM_sharing_op_fork_reset;
+
+return xc_memshr_memop(xch, domid, &mso);
+}
+
 int xc_memshr_audit(xc_interface *xch)
 {
 xen_mem_sharing_op_t mso;
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 18c1a2d6bf..094ab0d205 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1538,6 +1538,13 @@ int libxl_domain_create_new(libxl_ctx *ctx, 
libxl_domain_config *d_config,
 const libxl_asyncop_how *ao_how,
 const libxl_asyncprogress_how *aop_console_how)
 LIBXL_EXTERNAL_CALLERS_ONLY;
+int libxl_domain_fork_vm(libxl_ctx *ctx, uint32_t pdomid, uint32_t *domid)
+ LIBXL_EXTERNAL_CALLERS_ONLY;
+int libxl_domain_fork_launch_dm(libxl_ctx *ctx, libxl_domain_config *d_config,
+uint32_t domid,
+const libxl_asyncprogress_how *aop_console_how)
+LIBXL_EXTERNAL_CALLERS_ONLY;
+int libxl_domain_fork_reset(libxl_ctx *ctx, uint32_t domid);
 int libxl_domain_create_restore(libxl_ctx *ctx, 

[Xen-devel] [PATCH v8 3/5] xen/mem_sharing: VM forking

2020-02-10 Thread Tamas K Lengyel
VM forking is the process of creating a domain with an empty memory space and a
specified parent domain from which to populate the memory when necessary. For
the new domain to be functional the VM state is copied over as part of the fork
operation (HVM params, hap allocation, etc).
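
For context, the operation is reached through the sharing memop; a minimal
sketch of the caller side, using the libxc wrapper added by the toolstack
patch of this series (xch, parent_domid and fork_domid are assumed to exist
in the caller):

    /* Sketch: turn an already-created, empty domain into a fork of the
     * parent.  Xen pauses the parent and holds a reference to it until the
     * fork is destroyed. */
    int rc = xc_memshr_fork(xch, parent_domid, fork_domid);

    if ( rc )
        fprintf(stderr, "forking failed: %d\n", rc);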

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/domain.c |  11 ++
 xen/arch/x86/hvm/hvm.c|   2 +-
 xen/arch/x86/mm/mem_sharing.c | 221 ++
 xen/arch/x86/mm/p2m.c |  11 +-
 xen/include/asm-x86/mem_sharing.h |  17 +++
 xen/include/public/memory.h   |   5 +
 xen/include/xen/sched.h   |   2 +
 7 files changed, 267 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index f53ae5ff86..a98e2e0479 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2189,6 +2189,17 @@ int domain_relinquish_resources(struct domain *d)
 ret = relinquish_shared_pages(d);
 if ( ret )
 return ret;
+
+/*
+ * If the domain is forked, decrement the parent's pause count
+ * and release the domain.
+ */
+if ( d->parent )
+{
+domain_unpause(d->parent);
+put_domain(d->parent);
+d->parent = NULL;
+}
 }
 #endif
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 00a9e70b7c..55520bbd23 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1915,7 +1915,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long 
gla,
 }
 #endif
 
-/* Spurious fault? PoD and log-dirty also take this path. */
+/* Spurious fault? PoD, log-dirty and VM forking also take this path. */
 if ( p2m_is_ram(p2mt) )
 {
 rc = 1;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 3835bc928f..ccf338918d 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -22,6 +22,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -36,6 +37,9 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+#include 
 #include 
 
 #include "mm-locks.h"
@@ -1444,6 +1448,193 @@ static inline int mem_sharing_control(struct domain *d, 
bool enable)
 return 0;
 }
 
+/*
+ * Forking a page only gets called when the VM faults due to no entry being
+ * in the EPT for the access. Depending on the type of access we either
+ * populate the physmap with a shared entry for read-only access or
+ * fork the page if it's a write access.
+ *
+ * The client p2m is already locked so we only need to lock
+ * the parent's here.
+ */
+int mem_sharing_fork_page(struct domain *d, gfn_t gfn, bool unsharing)
+{
+int rc = -ENOENT;
+shr_handle_t handle;
+struct domain *parent;
+struct p2m_domain *p2m;
+unsigned long gfn_l = gfn_x(gfn);
+mfn_t mfn, new_mfn;
+p2m_type_t p2mt;
+struct page_info *page;
+
+if ( !mem_sharing_is_fork(d) )
+return -ENOENT;
+
+parent = d->parent;
+
+if ( !unsharing )
+{
+/* For read-only accesses we just add a shared entry to the physmap */
+while ( parent )
+{
+if ( !(rc = nominate_page(parent, gfn, 0, &handle)) )
+break;
+
+parent = parent->parent;
+}
+
+if ( !rc )
+{
+/* The client's p2m is already locked */
+struct p2m_domain *pp2m = p2m_get_hostp2m(parent);
+
+p2m_lock(pp2m);
+rc = add_to_physmap(parent, gfn_l, handle, d, gfn_l, false);
+p2m_unlock(pp2m);
+
+if ( !rc )
+return 0;
+}
+}
+
+/*
+ * If it's a write access (ie. unsharing) or if adding a shared entry to
+ * the physmap failed we'll fork the page directly.
+ */
+p2m = p2m_get_hostp2m(d);
+parent = d->parent;
+
+while ( parent )
+{
+mfn = get_gfn_query(parent, gfn_l, &p2mt);
+
+if ( mfn_valid(mfn) && p2m_is_any_ram(p2mt) )
+break;
+
+put_gfn(parent, gfn_l);
+parent = parent->parent;
+}
+
+if ( !parent )
+return -ENOENT;
+
+if ( !(page = alloc_domheap_page(d, 0)) )
+{
+put_gfn(parent, gfn_l);
+return -ENOMEM;
+}
+
+new_mfn = page_to_mfn(page);
+copy_domain_page(new_mfn, mfn);
+set_gpfn_from_mfn(mfn_x(new_mfn), gfn_l);
+
+put_gfn(parent, gfn_l);
+
+return p2m->set_entry(p2m, gfn, new_mfn, PAGE_ORDER_4K, p2m_ram_rw,
+  p2m->default_access, -1);
+}
+
+static int bring_up_vcpus(struct domain *cd, struct cpupool *cpupool)
+{
+int ret;
+unsigned int i;
+
+if ( (ret = cpupool_move_domain(cd, cpupool)) )
+return ret;
+
+for ( i = 0; i < cd->max_vcpus; i++ )
+{
+if ( cd->vcpu[i] )
+continue;
+
+if ( !vcpu_create(cd, i) )
+return -EINVAL;
+}
+
+

[Xen-devel] [PATCH v8 4/5] x86/mem_sharing: reset a fork

2020-02-10 Thread Tamas K Lengyel
Implement a hypercall that allows a fork to shed all memory that got allocated
for it during its execution and re-load its vCPU context from the parent VM.
This allows the forked VM to be reset to the state the parent VM is in, in a
faster way than creating a new fork would be. Measurements show about a 2x
speedup during normal fuzzing operations. Performance may vary depending on how
much memory got allocated for the forked VM. If it has been completely
deduplicated from the parent VM then creating a new fork would likely be more
performant.
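
From the toolstack side the reset is a single memop; a hedged sketch of the
kind of fuzzing loop the numbers above refer to, using the libxc wrapper
added later in this series (run_test_case() and iterations are hypothetical
stand-ins):

    unsigned int i;

    /* Sketch: execute a test case in the fork, then roll the fork back to
     * the parent's state instead of destroying and re-creating it. */
    for ( i = 0; i < iterations; i++ )
    {
        run_test_case(fork_domid);              /* hypothetical helper */

        if ( xc_memshr_fork_reset(xch, fork_domid) )
            break;
    }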

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_sharing.c | 76 +++
 xen/include/public/memory.h   |  1 +
 2 files changed, 77 insertions(+)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index ccf338918d..9d61592efa 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1635,6 +1635,59 @@ static int mem_sharing_fork(struct domain *d, struct 
domain *cd)
 return rc;
 }
 
+/*
+ * The fork reset operation is intended to be used on short-lived forks only.
+ * There is no hypercall continuation operation implemented for this reason.
+ * For forks that obtain a larger memory footprint it is likely going to be
+ * more performant to create a new fork instead of resetting an existing one.
+ *
+ * TODO: In case this hypercall would become useful on forks with larger memory
+ * footprints the hypercall continuation should be implemented.
+ */
+static int mem_sharing_fork_reset(struct domain *d, struct domain *cd)
+{
+int rc;
+struct p2m_domain* p2m = p2m_get_hostp2m(cd);
+struct page_info *page, *tmp;
+
+domain_pause(cd);
+
+page_list_for_each_safe(page, tmp, &cd->page_list)
+{
+p2m_type_t p2mt;
+p2m_access_t p2ma;
+gfn_t gfn;
+mfn_t mfn = page_to_mfn(page);
+
+if ( !mfn_valid(mfn) )
+continue;
+
+gfn = mfn_to_gfn(cd, mfn);
+mfn = __get_gfn_type_access(p2m, gfn_x(gfn), &p2mt, &p2ma,
+0, NULL, false);
+
+if ( !p2m_is_ram(p2mt) || p2m_is_shared(p2mt) )
+continue;
+
+/* take an extra reference */
+if ( !get_page(page, cd) )
+continue;
+
+rc = p2m->set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
+p2m_invalid, p2m_access_rwx, -1);
+ASSERT(!rc);
+
+put_page_alloc_ref(page);
+put_page(page);
+}
+
+if ( !(rc = hvm_copy_context_and_params(cd, d)) )
+fork_tsc(cd, d);
+
+domain_unpause(cd);
+return rc;
+}
+
 int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 {
 int rc;
@@ -1919,6 +1972,29 @@ int 
mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 break;
 }
 
+case XENMEM_sharing_op_fork_reset:
+{
+struct domain *pd;
+
+rc = -EINVAL;
+if ( mso.u.fork._pad[0] || mso.u.fork._pad[1] ||
+ mso.u.fork._pad[2] )
+goto out;
+
+rc = -ENOSYS;
+if ( !d->parent )
+goto out;
+
+rc = rcu_lock_live_remote_domain_by_id(d->parent->domain_id, &pd);
+if ( rc )
+goto out;
+
+rc = mem_sharing_fork_reset(pd, d);
+
+rcu_unlock_domain(pd);
+break;
+}
+
 default:
 rc = -ENOSYS;
 break;
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 90a3f4498e..e3d063e22e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -483,6 +483,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
 #define XENMEM_sharing_op_audit 7
 #define XENMEM_sharing_op_range_share   8
 #define XENMEM_sharing_op_fork  9
+#define XENMEM_sharing_op_fork_reset10
 
 #define XENMEM_SHARING_OP_S_HANDLE_INVALID  (-10)
 #define XENMEM_SHARING_OP_C_HANDLE_INVALID  (-9)
-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [xen-unstable-smoke test] 146832: tolerable all pass - PUSHED

2020-02-10 Thread osstest service owner
flight 146832 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146832/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  270ff9a835fb4bcfead85a84d0f384b77bde93c0
baseline version:
 xen  72dbcf0c065037dddb591a072c4f8f16fe888ea8

Last test of basis   146767  2020-02-06 17:01:03 Z4 days
Failing since146806  2020-02-08 13:00:53 Z2 days   10 attempts
Testing same since   146832  2020-02-10 16:00:51 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Christian Lindig 
  Jan Beulich 
  Jeff Kubascik 
  Julien Grall 
  Julien Grall 
  Marek Marczykowski-Górecki 
  Wei Liu 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   72dbcf0c06..270ff9a835  270ff9a835fb4bcfead85a84d0f384b77bde93c0 -> smoke

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH] xen/arm: Restrict access to most HVM_PARAM's

2020-02-10 Thread Andrew Cooper
ARM currently has no restrictions on toolstack and guest access to the entire
HVM_PARAM block.  As the paging/monitor/sharing features aren't under security
support, this doesn't need an XSA.

The CALLBACK_IRQ and {STORE,CONSOLE}_{PFN,EVTCHN} details are exposed read-only to
the guest, while the *_RING_PFN details are restricted to only toolstack
access.  No other parameters are used.

Signed-off-by: Andrew Cooper 
---
CC: Stefano Stabellini 
CC: Julien Grall 
CC: Volodymyr Babchuk 

This is only compile tested, and based on my reading of the source.  There
might be other PARAMS that need including.
---
 xen/arch/arm/hvm.c | 65 +++---
 1 file changed, 62 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 76b27c9168..1446d4010c 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -31,6 +31,60 @@
 
 #include 
 
+static int hvm_allow_set_param(const struct domain *d, unsigned int param)
+{
+switch ( param )
+{
+/*
+ * The following parameters are intended for toolstack usage only.
+ * They may not be set by the domain.
+ *
+ * The {STORE,CONSOLE}_EVTCHN values will need to become read/write if
+ * a new ABI hasn't appeared by the time migration support is added.
+ */
+case HVM_PARAM_CALLBACK_IRQ:
+case HVM_PARAM_STORE_PFN:
+case HVM_PARAM_STORE_EVTCHN:
+case HVM_PARAM_CONSOLE_PFN:
+case HVM_PARAM_CONSOLE_EVTCHN:
+case HVM_PARAM_PAGING_RING_PFN:
+case HVM_PARAM_MONITOR_RING_PFN:
+case HVM_PARAM_SHARING_RING_PFN:
+return d == current->domain ? -EPERM : 0;
+
+/* Writeable only by Xen, hole, deprecated, or out-of-range. */
+default:
+return -EINVAL;
+}
+}
+
+static int hvm_allow_get_param(const struct domain *d, unsigned int param)
+{
+switch ( param )
+{
+/* The following parameters can be read by the guest and toolstack. */
+case HVM_PARAM_CALLBACK_IRQ:
+case HVM_PARAM_STORE_PFN:
+case HVM_PARAM_STORE_EVTCHN:
+case HVM_PARAM_CONSOLE_PFN:
+case HVM_PARAM_CONSOLE_EVTCHN:
+return 0;
+
+/*
+ * The following parameters are intended for toolstack usage only.
+ * They may not be read by the domain.
+ */
+case HVM_PARAM_PAGING_RING_PFN:
+case HVM_PARAM_MONITOR_RING_PFN:
+case HVM_PARAM_SHARING_RING_PFN:
+return d == current->domain ? -EPERM : 0;
+
+/* Hole, deprecated, or out-of-range. */
+default:
+return -EINVAL;
+}
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
 long rc = 0;
@@ -46,9 +100,6 @@ long do_hvm_op(unsigned long op, 
XEN_GUEST_HANDLE_PARAM(void) arg)
 if ( copy_from_guest(&a, arg, 1) )
 return -EFAULT;
 
-if ( a.index >= HVM_NR_PARAMS )
-return -EINVAL;
-
 d = rcu_lock_domain_by_any_id(a.domid);
 if ( d == NULL )
 return -ESRCH;
@@ -59,10 +110,18 @@ long do_hvm_op(unsigned long op, 
XEN_GUEST_HANDLE_PARAM(void) arg)
 
 if ( op == HVMOP_set_param )
 {
+rc = hvm_allow_set_param(d, a.index);
+if ( rc )
+goto param_fail;
+
 d->arch.hvm.params[a.index] = a.value;
 }
 else
 {
+rc = hvm_allow_get_param(d, a.index);
+if ( rc )
+goto param_fail;
+
 a.value = d->arch.hvm.params[a.index];
 rc = copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
 }
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH] x86/pvh: Adjust dom0's starting state

2020-02-10 Thread Andrew Cooper
Fixes: b25fb1a04e "xen/pvh: Fix segment selector ABI"
Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Wei Liu 
CC: Roger Pau Monné 
---
 xen/arch/x86/hvm/dom0_build.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 831325150b..380412151b 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -626,10 +626,12 @@ static int __init pvh_setup_cpus(struct domain *d, 
paddr_t entry,
 .cpu_regs.x86_32.cr0 = X86_CR0_PE | X86_CR0_ET,
 .cpu_regs.x86_32.cs_limit = ~0u,
 .cpu_regs.x86_32.ds_limit = ~0u,
+.cpu_regs.x86_32.es_limit = ~0u,
 .cpu_regs.x86_32.ss_limit = ~0u,
 .cpu_regs.x86_32.tr_limit = 0x67,
 .cpu_regs.x86_32.cs_ar = 0xc9b,
 .cpu_regs.x86_32.ds_ar = 0xc93,
+.cpu_regs.x86_32.es_ar = 0xc93,
 .cpu_regs.x86_32.ss_ar = 0xc93,
 .cpu_regs.x86_32.tr_ar = 0x8b,
 };
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [ovmf test] 146824: trouble: blocked/broken

2020-02-10 Thread osstest service owner
flight 146824 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146824/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvopsbroken
 build-amd64  broken
 build-i386   broken
 build-i386-pvops broken
 build-i386-xsm   broken
 build-amd64-xsm  broken
 build-i386-pvops  4 host-install(4)broken REGR. vs. 145767
 build-amd64-xsm   4 host-install(4)broken REGR. vs. 145767
 build-amd64   4 host-install(4)broken REGR. vs. 145767
 build-amd64-pvops 4 host-install(4)broken REGR. vs. 145767
 build-i386-xsm4 host-install(4)broken REGR. vs. 145767
 build-i3864 host-install(4)broken REGR. vs. 145767

Tests which did not succeed, but are not blocking:
 build-i386-libvirt1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf b34ed98694f053bb70d8dbe11240accf06ac23ce
baseline version:
 ovmf 70911f1f4aee0366b6122f2b90d367ec0f066beb

Last test of basis   145767  2020-01-08 00:39:09 Z   33 days
Failing since145774  2020-01-08 02:50:20 Z   33 days  117 attempts
Testing same since   146824  2020-02-09 23:41:28 Z0 days1 attempts


People who touched revisions under test:
  Aaron Li 
  Albecki, Mateusz 
  Amol N Sukerkar 
  Anthony PERARD 
  Ard Biesheuvel 
  Ashish Singhal 
  Bob Feng 
  Bret Barkelew 
  Brian R Haug 
  Eric Dong 
  Fan, ZhijuX 
  Guo Dong 
  Hao A Wu 
  Heng Luo 
  Jason Voelz 
  Jeff Brasen 
  Jian J Wang 
  Kinney, Michael D 
  Krzysztof Koch 
  Laszlo Ersek 
  Leif Lindholm 
  Li, Aaron 
  Liming Gao 
  Liu, Zhiguang 
  Mateusz Albecki 
  Michael D Kinney 
  Michael Kubacki 
  Pavana.K 
  Philippe Mathieu-Daudé 
  Philippe Mathieu-Daude 
  Pierre Gondois 
  Sean Brogan 
  Siyuan Fu 
  Siyuan, Fu 
  Steven 
  Steven Shi 
  Sudipto Paul 
  Vitaly Cheptsov 
  Vitaly Cheptsov via Groups.Io 
  Wei6 Xu 
  Xu, Wei6 
  Zhichao Gao 
  Zhiguang Liu 
  Zhiju.Fan 

jobs:
 build-amd64-xsm  broken  
 build-i386-xsm   broken  
 build-amd64  broken  
 build-i386   broken  
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopsbroken  
 build-i386-pvops broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64-pvops broken
broken-job build-amd64 broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-job build-amd64-xsm broken
broken-step build-i386-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386 host-install(4)

Not pushing.

(No revision log; it would be 3134 lines long.)

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v2 4/4] AMD/IOMMU: Treat head/tail pointers as byte offsets

2020-02-10 Thread Andrew Cooper
The MMIO registers are already byte offsets.  Using them in this form removes
the need to shift their values for use.

It is also inefficient to store both entries and alloc_size (which only differ
by entry_size).  Rename alloc_size to size, and drop entries entirely, which
simplifies the allocation/deallocation helpers slightly.

Mark send_iommu_command() and invalidate_iommu_all() as static, as they have
no external declaration or callers.

Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Wei Liu 
CC: Roger Pau Monné 

v2:
 * Mask head/tail pointers
 * Drop unnecessary cast.

Tested on a Rome system.
---
 xen/drivers/passthrough/amd/iommu-defs.h |  1 -
 xen/drivers/passthrough/amd/iommu.h  | 18 ++
 xen/drivers/passthrough/amd/iommu_cmd.c  | 21 +
 xen/drivers/passthrough/amd/iommu_init.c | 32 ++--
 4 files changed, 25 insertions(+), 47 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu-defs.h 
b/xen/drivers/passthrough/amd/iommu-defs.h
index 963009de6a..50613ca150 100644
--- a/xen/drivers/passthrough/amd/iommu-defs.h
+++ b/xen/drivers/passthrough/amd/iommu-defs.h
@@ -481,7 +481,6 @@ struct amd_iommu_pte {
 #define INV_IOMMU_ALL_PAGES_ADDRESS  ((1ULL << 63) - 1)
 
 #define IOMMU_RING_BUFFER_PTR_MASK  0x0007FFF0
-#define IOMMU_RING_BUFFER_PTR_SHIFT 4
 
 #define IOMMU_CMD_DEVICE_ID_MASK0x
 #define IOMMU_CMD_DEVICE_ID_SHIFT   0
diff --git a/xen/drivers/passthrough/amd/iommu.h 
b/xen/drivers/passthrough/amd/iommu.h
index 0b598d06f8..1abfdc685a 100644
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -58,12 +58,11 @@ struct table_struct {
 };
 
 struct ring_buffer {
+spinlock_t lock;/* protect buffer pointers */
 void *buffer;
-unsigned long entries;
-unsigned long alloc_size;
 uint32_t tail;
 uint32_t head;
-spinlock_t lock;/* protect buffer pointers */
+uint32_t size;
 };
 
 typedef struct iommu_cap {
@@ -379,19 +378,6 @@ static inline int iommu_has_cap(struct amd_iommu *iommu, 
uint32_t bit)
 return !!(iommu->cap.header & (1u << bit));
 }
 
-/* access tail or head pointer of ring buffer */
-static inline uint32_t iommu_get_rb_pointer(uint32_t reg)
-{
-return get_field_from_reg_u32(reg, IOMMU_RING_BUFFER_PTR_MASK,
-  IOMMU_RING_BUFFER_PTR_SHIFT);
-}
-
-static inline void iommu_set_rb_pointer(uint32_t *reg, uint32_t val)
-{
-set_field_in_reg_u32(val, *reg, IOMMU_RING_BUFFER_PTR_MASK,
- IOMMU_RING_BUFFER_PTR_SHIFT, reg);
-}
-
 /* access device id field from iommu cmd */
 static inline uint16_t iommu_get_devid_from_cmd(uint32_t cmd)
 {
diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c 
b/xen/drivers/passthrough/amd/iommu_cmd.c
index 1eae339692..249ed345a0 100644
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -24,16 +24,15 @@ static int queue_iommu_command(struct amd_iommu *iommu, u32 
cmd[])
 {
 uint32_t tail, head;
 
-tail = iommu->cmd_buffer.tail;
-if ( ++tail == iommu->cmd_buffer.entries )
+tail = iommu->cmd_buffer.tail + IOMMU_CMD_BUFFER_ENTRY_SIZE;
+if ( tail == iommu->cmd_buffer.size )
 tail = 0;
 
-head = iommu_get_rb_pointer(readl(iommu->mmio_base +
-  IOMMU_CMD_BUFFER_HEAD_OFFSET));
+head = readl(iommu->mmio_base +
+ IOMMU_CMD_BUFFER_HEAD_OFFSET) & IOMMU_RING_BUFFER_PTR_MASK;
 if ( head != tail )
 {
-memcpy(iommu->cmd_buffer.buffer +
-   (iommu->cmd_buffer.tail * IOMMU_CMD_BUFFER_ENTRY_SIZE),
+memcpy(iommu->cmd_buffer.buffer + iommu->cmd_buffer.tail,
cmd, IOMMU_CMD_BUFFER_ENTRY_SIZE);
 
 iommu->cmd_buffer.tail = tail;
@@ -45,13 +44,11 @@ static int queue_iommu_command(struct amd_iommu *iommu, u32 
cmd[])
 
 static void commit_iommu_command_buffer(struct amd_iommu *iommu)
 {
-u32 tail = 0;
-
-iommu_set_rb_pointer(&tail, iommu->cmd_buffer.tail);
-writel(tail, iommu->mmio_base+IOMMU_CMD_BUFFER_TAIL_OFFSET);
+writel(iommu->cmd_buffer.tail,
+   iommu->mmio_base + IOMMU_CMD_BUFFER_TAIL_OFFSET);
 }
 
-int send_iommu_command(struct amd_iommu *iommu, u32 cmd[])
+static int send_iommu_command(struct amd_iommu *iommu, u32 cmd[])
 {
 if ( queue_iommu_command(iommu, cmd) )
 {
@@ -261,7 +258,7 @@ static void invalidate_interrupt_table(struct amd_iommu 
*iommu, u16 device_id)
 send_iommu_command(iommu, cmd);
 }
 
-void invalidate_iommu_all(struct amd_iommu *iommu)
+static void invalidate_iommu_all(struct amd_iommu *iommu)
 {
 u32 cmd[4], entry;
 
diff --git a/xen/drivers/passthrough/amd/iommu_init.c 
b/xen/drivers/passthrough/amd/iommu_init.c
index 5544dd9505..c42b608f07 100644
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -117,7 

[Xen-devel] [PATCH v4 6/7] xen/guest: prepare hypervisor ops to use alternative calls

2020-02-10 Thread Roger Pau Monne
Adapt the hypervisor ops framework so it can be used with the
alternative calls framework. So far no hooks are modified to make use
of the alternatives patching, as they are not in any hot path.

No functional change intended.
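
For reference, converting a hook later only changes the call site from a
plain indirect call into the patched form, roughly as below. This is a
sketch only: it assumes the usual alternative_vcall() helper for void hooks,
and in this series only the TLB flush hook of a later patch is actually
converted (via alternative_call()).

    /* Today: ordinary indirect call through the ops struct. */
    if ( ops.setup )
        ops.setup();

    /* Converted (sketch): the alternatives framework patches in a direct
     * call during boot. */
    if ( ops.setup )
        alternative_vcall(ops.setup);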

Signed-off-by: Roger Pau Monné 
---
Changes since v3:
 - New in this version.
---
 xen/arch/x86/guest/hyperv/hyperv.c |  2 +-
 xen/arch/x86/guest/hypervisor.c| 41 +++---
 xen/arch/x86/guest/xen/xen.c   |  2 +-
 3 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/guest/hyperv/hyperv.c 
b/xen/arch/x86/guest/hyperv/hyperv.c
index fabc62b0d6..70f4cd5ae0 100644
--- a/xen/arch/x86/guest/hyperv/hyperv.c
+++ b/xen/arch/x86/guest/hyperv/hyperv.c
@@ -199,7 +199,7 @@ static void __init e820_fixup(struct e820map *e820)
 panic("Unable to reserve Hyper-V hypercall range\n");
 }
 
-static const struct hypervisor_ops ops = {
+static const struct hypervisor_ops __initdata ops = {
 .name = "Hyper-V",
 .setup = setup,
 .ap_setup = ap_setup,
diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 5fd433c8d4..647cdb1367 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -24,52 +24,53 @@
 #include 
 #include 
 
-static const struct hypervisor_ops *__read_mostly ops;
+static struct hypervisor_ops __read_mostly ops;
 
 const char *__init hypervisor_probe(void)
 {
+const struct hypervisor_ops *fns;
+
 if ( !cpu_has_hypervisor )
 return NULL;
 
-ops = xg_probe();
-if ( ops )
-return ops->name;
+fns = xg_probe();
+if ( !fns )
+/*
+ * Detection of Hyper-V must come after Xen to avoid false positive due
+ * to viridian support
+ */
+fns = hyperv_probe();
 
-/*
- * Detection of Hyper-V must come after Xen to avoid false positive due
- * to viridian support
- */
-ops = hyperv_probe();
-if ( ops )
-return ops->name;
+if ( fns )
+ops = *fns;
 
-return NULL;
+return ops.name;
 }
 
 void __init hypervisor_setup(void)
 {
-if ( ops && ops->setup )
-ops->setup();
+if ( ops.setup )
+ops.setup();
 }
 
 int hypervisor_ap_setup(void)
 {
-if ( ops && ops->ap_setup )
-return ops->ap_setup();
+if ( ops.ap_setup )
+return ops.ap_setup();
 
 return 0;
 }
 
 void hypervisor_resume(void)
 {
-if ( ops && ops->resume )
-ops->resume();
+if ( ops.resume )
+ops.resume();
 }
 
 void __init hypervisor_e820_fixup(struct e820map *e820)
 {
-if ( ops && ops->e820_fixup )
-ops->e820_fixup(e820);
+if ( ops.e820_fixup )
+ops.e820_fixup(e820);
 }
 
 /*
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index 3cf8f667a1..f151b07548 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -324,7 +324,7 @@ static void __init e820_fixup(struct e820map *e820)
 pv_shim_fixup_e820(e820);
 }
 
-static const struct hypervisor_ops ops = {
+static const struct hypervisor_ops __initdata ops = {
 .name = "Xen",
 .setup = setup,
 .ap_setup = ap_setup,
-- 
2.25.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 4/7] x86/tlb: introduce a flush guests TLB flag

2020-02-10 Thread Roger Pau Monne
Introduce a specific flag to request a HVM guest TLB flush, which is
an ASID/VPID tickle that forces a linear TLB flush for all HVM guests.

This was previously unconditionally done in each pre_flush call, but
that's not required: HVM guests not using shadow don't require linear
TLB flushes as Xen doesn't modify the guest page tables in that case
(ie: when using HAP).

Modify all shadow code TLB flushes to also flush the guest TLB, in
order to keep the previous behavior. I haven't looked at each specific
shadow code TLB flush in order to figure out whether it actually
requires a guest TLB flush or not, so there might be room for
improvement in that regard.

Signed-off-by: Roger Pau Monné 
Reviewed-by: Wei Liu 
---
 xen/arch/x86/flushtlb.c |  5 +++--
 xen/arch/x86/mm/shadow/common.c | 18 +-
 xen/arch/x86/mm/shadow/hvm.c|  2 +-
 xen/arch/x86/mm/shadow/multi.c  | 16 
 xen/include/asm-x86/flushtlb.h  |  2 ++
 5 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index 03f92c23dc..e7ccd4ec7b 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -59,8 +59,6 @@ static u32 pre_flush(void)
 raise_softirq(NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ);
 
  skip_clocktick:
-hvm_flush_guest_tlbs();
-
 return t2;
 }
 
@@ -221,6 +219,9 @@ unsigned int flush_area_local(const void *va, unsigned int 
flags)
 do_tlb_flush();
 }
 
+if ( flags & FLUSH_GUESTS_TLB )
+hvm_flush_guest_tlbs();
+
 if ( flags & FLUSH_CACHE )
 {
 const struct cpuinfo_x86 *c = &current_cpu_data;
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index ee90e55b41..2a85e10112 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -363,7 +363,7 @@ static int oos_remove_write_access(struct vcpu *v, mfn_t 
gmfn,
 }
 
 if ( ftlb )
-flush_tlb_mask(d->dirty_cpumask);
+flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
 return 0;
 }
@@ -939,7 +939,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int 
pages)
 /* See if that freed up enough space */
 if ( d->arch.paging.shadow.free_pages >= pages )
 {
-flush_tlb_mask(d->dirty_cpumask);
+flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 return;
 }
 }
@@ -993,7 +993,7 @@ static void shadow_blow_tables(struct domain *d)
pagetable_get_mfn(v->arch.shadow_table[i]), 0);
 
 /* Make sure everyone sees the unshadowings */
-flush_tlb_mask(d->dirty_cpumask);
+flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 }
 
 void shadow_blow_tables_per_domain(struct domain *d)
@@ -1102,7 +1102,7 @@ mfn_t shadow_alloc(struct domain *d,
 if ( unlikely(!cpumask_empty(&mask)) )
 {
 perfc_incr(shadow_alloc_tlbflush);
-flush_tlb_mask(&mask);
+flush_mask(&mask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 }
 /* Now safe to clear the page for reuse */
 clear_domain_page(page_to_mfn(sp));
@@ -2290,7 +2290,7 @@ void sh_remove_shadows(struct domain *d, mfn_t gmfn, int 
fast, int all)
 
 /* Need to flush TLBs now, so that linear maps are safe next time we
  * take a fault. */
-flush_tlb_mask(d->dirty_cpumask);
+flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
 paging_unlock(d);
 }
@@ -3005,7 +3005,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, 
unsigned long gfn,
 {
 sh_remove_all_shadows_and_parents(d, mfn);
 if ( sh_remove_all_mappings(d, mfn, _gfn(gfn)) )
-flush_tlb_mask(d->dirty_cpumask);
+flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 }
 }
 
@@ -3045,7 +3045,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, 
unsigned long gfn,
 }
 omfn = mfn_add(omfn, 1);
 }
-flush_tlb_mask(&flushmask);
+flush_mask(&flushmask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
 if ( npte )
 unmap_domain_page(npte);
@@ -3332,7 +3332,7 @@ int shadow_track_dirty_vram(struct domain *d,
 }
 }
 if ( flush_tlb )
-flush_tlb_mask(d->dirty_cpumask);
+flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 goto out;
 
 out_sl1ma:
@@ -3402,7 +3402,7 @@ bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, 
struct vcpu *v),
 }
 
 /* Flush TLBs on all CPUs with dirty vcpu state. */
-flush_tlb_mask(mask);
+flush_mask(mask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
 /* Done. */
 for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index a219266fa2..64077d181b 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -590,7 +590,7 @@ static 

[Xen-devel] [PATCH v4 1/7] x86/hvm: allow ASID flush when v != current

2020-02-10 Thread Roger Pau Monne
Current implementation of hvm_asid_flush_vcpu is not safe to use
unless the target vCPU is either paused or the currently running one,
as it modifies the generation without any locking.

Fix this by using atomic operations when accessing the generation
field, both in hvm_asid_flush_vcpu_asid and other ASID functions. This
allows safely flushing the current ASID generation. Note that if the
vCPU is currently running, a vmexit is required for the flush to take
effect.

Note the same could be achieved by introducing an extra field to
hvm_vcpu_asid that signals hvm_asid_handle_vmenter the need to call
hvm_asid_flush_vcpu on the given vCPU before vmentry. This however
seems unnecessary, as hvm_asid_flush_vcpu itself only sets two vCPU
fields to 0, so there's no need to delay this to the vmentry ASID
helper.

This is not a bugfix as no callers that would violate the assumptions
listed in the first paragraph have been found, but a preparatory
change in order to allow remote flushing of HVM vCPUs.

Signed-off-by: Roger Pau Monné 
Reviewed-by: Wei Liu 
---
 xen/arch/x86/hvm/asid.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/asid.c b/xen/arch/x86/hvm/asid.c
index 8e00a28443..63ce462d56 100644
--- a/xen/arch/x86/hvm/asid.c
+++ b/xen/arch/x86/hvm/asid.c
@@ -83,7 +83,7 @@ void hvm_asid_init(int nasids)
 
 void hvm_asid_flush_vcpu_asid(struct hvm_vcpu_asid *asid)
 {
-asid->generation = 0;
+write_atomic(&asid->generation, 0);
 }
 
 void hvm_asid_flush_vcpu(struct vcpu *v)
@@ -121,7 +121,7 @@ bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
 goto disabled;
 
 /* Test if VCPU has valid ASID. */
-if ( asid->generation == data->core_asid_generation )
+if ( read_atomic(&asid->generation) == data->core_asid_generation )
 return 0;
 
 /* If there are no free ASIDs, need to go to a new generation */
@@ -135,7 +135,7 @@ bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
 
 /* Now guaranteed to be a free ASID. */
 asid->asid = data->next_asid++;
-asid->generation = data->core_asid_generation;
+write_atomic(&asid->generation, data->core_asid_generation);
 
 /*
  * When we assign ASID 1, flush all TLB entries as we are starting a new
-- 
2.25.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 7/7] x86/tlb: use Xen L0 assisted TLB flush when available

2020-02-10 Thread Roger Pau Monne
Use Xen's L0 HVMOP_flush_tlbs hypercall in order to perform flushes.
This greatly increases the performance of TLB flushes when running
with a high number of vCPUs as a Xen guest, and is especially important
when running in shim mode.

The following figures are from a PV guest running `make -j32 xen` in
shim mode with 32 vCPUs and HAP.

Using x2APIC and ALLBUT shorthand:
real4m35.973s
user4m35.110s
sys 36m24.117s

Using L0 assisted flush:
real1m2.596s
user4m34.818s
sys 5m16.374s

The implementation adds a new hook to hypervisor_ops so other
enlightenments can also implement such assisted flush just by filling
the hook. Note that the Xen implementation completely ignores the
dirty CPU mask and the linear address passed in, and always performs a
global TLB flush on all vCPUs.

Signed-off-by: Roger Pau Monné 
---
Changes since v3:
 - Use an alternative call for the flush hook.

Changes since v1:
 - Add a L0 assisted hook to hypervisor ops.
---
 xen/arch/x86/guest/hypervisor.c| 10 ++
 xen/arch/x86/guest/xen/xen.c   |  6 ++
 xen/arch/x86/smp.c | 11 +++
 xen/include/asm-x86/guest/hypervisor.h | 17 +
 4 files changed, 44 insertions(+)

diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 647cdb1367..47e938e287 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -18,6 +18,7 @@
  *
  * Copyright (c) 2019 Microsoft.
  */
+#include 
 #include 
 #include 
 
@@ -73,6 +74,15 @@ void __init hypervisor_e820_fixup(struct e820map *e820)
 ops.e820_fixup(e820);
 }
 
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+ unsigned int order)
+{
+if ( ops.flush_tlb )
+return alternative_call(ops.flush_tlb, mask, va, order);
+
+return -ENOSYS;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index f151b07548..5d3427a713 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -324,12 +324,18 @@ static void __init e820_fixup(struct e820map *e820)
 pv_shim_fixup_e820(e820);
 }
 
+static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int order)
+{
+return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
+}
+
 static const struct hypervisor_ops __initdata ops = {
 .name = "Xen",
 .setup = setup,
 .ap_setup = ap_setup,
 .resume = resume,
 .e820_fixup = e820_fixup,
+.flush_tlb = flush_tlb,
 };
 
 const struct hypervisor_ops *__init xg_probe(void)
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 65eb7cbda8..9bc925616a 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -256,6 +257,16 @@ void flush_area_mask(const cpumask_t *mask, const void 
*va, unsigned int flags)
 if ( (flags & ~FLUSH_ORDER_MASK) &&
  !cpumask_subset(mask, cpumask_of(cpu)) )
 {
+if ( cpu_has_hypervisor &&
+ !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
+ FLUSH_ORDER_MASK)) &&
+ !hypervisor_flush_tlb(mask, va, flags & FLUSH_ORDER_MASK) )
+{
+if ( tlb_clk_enabled )
+tlb_clk_enabled = false;
+return;
+}
+
 spin_lock(&flush_lock);
 cpumask_and(&flush_cpumask, mask, &cpu_online_map);
 cpumask_clear_cpu(cpu, &flush_cpumask);
diff --git a/xen/include/asm-x86/guest/hypervisor.h 
b/xen/include/asm-x86/guest/hypervisor.h
index ade10e74ea..432e57c2a0 100644
--- a/xen/include/asm-x86/guest/hypervisor.h
+++ b/xen/include/asm-x86/guest/hypervisor.h
@@ -19,6 +19,8 @@
 #ifndef __X86_HYPERVISOR_H__
 #define __X86_HYPERVISOR_H__
 
+#include 
+
 #include 
 
 struct hypervisor_ops {
@@ -32,6 +34,8 @@ struct hypervisor_ops {
 void (*resume)(void);
 /* Fix up e820 map */
 void (*e820_fixup)(struct e820map *e820);
+/* L0 assisted TLB flush */
+int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int 
order);
 };
 
 #ifdef CONFIG_GUEST
@@ -41,6 +45,14 @@ void hypervisor_setup(void);
 int hypervisor_ap_setup(void);
 void hypervisor_resume(void);
 void hypervisor_e820_fixup(struct e820map *e820);
+/*
+ * L0 assisted TLB flush.
+ * mask: cpumask of the dirty vCPUs that should be flushed.
+ * va: linear address to flush, or NULL for global flushes.
+ * order: order of the linear address pointed by va.
+ */
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+ unsigned int order);
 
 #else
 
@@ -52,6 +64,11 @@ static inline void hypervisor_setup(void) { 
ASSERT_UNREACHABLE(); }
 static inline int hypervisor_ap_setup(void) { return 0; }
 static inline void hypervisor_resume(void) { ASSERT_UNREACHABLE(); }
 static inline void hypervisor_e820_fixup(struct e820map *e820) {}
+static inline int hypervisor_flush_tlb(const 

[Xen-devel] [PATCH v4 2/7] x86/paging: add TLB flush hooks

2020-02-10 Thread Roger Pau Monne
Add shadow and hap implementation specific helpers to perform guest
TLB flushes. Note that the code for both is exactly the same at the
moment, and is copied from hvm_flush_vcpu_tlb. This will be changed by
further patches that will add implementation specific optimizations to
them.

No functional change intended.

Signed-off-by: Roger Pau Monné 
Reviewed-by: Wei Liu 
---
Changes since v3:
 - Fix stray newline removal.
 - Fix return of shadow_flush_tlb dummy function.
---
 xen/arch/x86/hvm/hvm.c  | 51 ++
 xen/arch/x86/mm/hap/hap.c   | 54 
 xen/arch/x86/mm/shadow/common.c | 55 +
 xen/include/asm-x86/hap.h   |  3 ++
 xen/include/asm-x86/shadow.h| 12 +++
 5 files changed, 127 insertions(+), 48 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 00a9e70b7c..4049f57232 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3990,55 +3990,10 @@ static void hvm_s3_resume(struct domain *d)
 bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
 void *ctxt)
 {
-static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
-cpumask_t *mask = &this_cpu(flush_cpumask);
-struct domain *d = current->domain;
-struct vcpu *v;
-
-/* Avoid deadlock if more than one vcpu tries this at the same time. */
-if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-return false;
-
-/* Pause all other vcpus. */
-for_each_vcpu ( d, v )
-if ( v != current && flush_vcpu(ctxt, v) )
-vcpu_pause_nosync(v);
-
-/* Now that all VCPUs are signalled to deschedule, we wait... */
-for_each_vcpu ( d, v )
-if ( v != current && flush_vcpu(ctxt, v) )
-while ( !vcpu_runnable(v) && v->is_running )
-cpu_relax();
-
-/* All other vcpus are paused, safe to unlock now. */
-spin_unlock(&d->hypercall_deadlock_mutex);
-
-cpumask_clear(mask);
-
-/* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
-for_each_vcpu ( d, v )
-{
-unsigned int cpu;
-
-if ( !flush_vcpu(ctxt, v) )
-continue;
-
-paging_update_cr3(v, false);
+struct domain *currd = current->domain;
 
-cpu = read_atomic(&v->dirty_cpu);
-if ( is_vcpu_dirty_cpu(cpu) )
-__cpumask_set_cpu(cpu, mask);
-}
-
-/* Flush TLBs on all CPUs with dirty vcpu state. */
-flush_tlb_mask(mask);
-
-/* Done. */
-for_each_vcpu ( d, v )
-if ( v != current && flush_vcpu(ctxt, v) )
-vcpu_unpause(v);
-
-return true;
+return shadow_mode_enabled(currd) ? shadow_flush_tlb(flush_vcpu, ctxt)
+  : hap_flush_tlb(flush_vcpu, ctxt);
 }
 
 static bool always_flush(void *ctxt, struct vcpu *v)
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d93f3451c..6894c1aa38 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -669,6 +669,60 @@ static void hap_update_cr3(struct vcpu *v, int do_locking, 
bool noflush)
 hvm_update_guest_cr3(v, noflush);
 }
 
+bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+   void *ctxt)
+{
+static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+cpumask_t *mask = &this_cpu(flush_cpumask);
+struct domain *d = current->domain;
+struct vcpu *v;
+
+/* Avoid deadlock if more than one vcpu tries this at the same time. */
+if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+return false;
+
+/* Pause all other vcpus. */
+for_each_vcpu ( d, v )
+if ( v != current && flush_vcpu(ctxt, v) )
+vcpu_pause_nosync(v);
+
+/* Now that all VCPUs are signalled to deschedule, we wait... */
+for_each_vcpu ( d, v )
+if ( v != current && flush_vcpu(ctxt, v) )
+while ( !vcpu_runnable(v) && v->is_running )
+cpu_relax();
+
+/* All other vcpus are paused, safe to unlock now. */
+spin_unlock(&d->hypercall_deadlock_mutex);
+
+cpumask_clear(mask);
+
+/* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+for_each_vcpu ( d, v )
+{
+unsigned int cpu;
+
+if ( !flush_vcpu(ctxt, v) )
+continue;
+
+paging_update_cr3(v, false);
+
+cpu = read_atomic(&v->dirty_cpu);
+if ( is_vcpu_dirty_cpu(cpu) )
+__cpumask_set_cpu(cpu, mask);
+}
+
+/* Flush TLBs on all CPUs with dirty vcpu state. */
+flush_tlb_mask(mask);
+
+/* Done. */
+for_each_vcpu ( d, v )
+if ( v != current && flush_vcpu(ctxt, v) )
+vcpu_unpause(v);
+
+return true;
+}
+
 const struct paging_mode *
 hap_paging_get_mode(struct vcpu *v)
 {
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6212ec2c4a..ee90e55b41 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ 

[Xen-devel] [PATCH v4 3/7] x86/hap: improve hypervisor assisted guest TLB flush

2020-02-10 Thread Roger Pau Monne
The current implementation of the hypervisor assisted flush for HAP is
extremely inefficient.

First of all there's no need to call paging_update_cr3, as the only
relevant part of that function when doing a flush is the ASID vCPU
flush, so just call that function directly.

Since hvm_asid_flush_vcpu is protected against concurrent callers by
using atomic operations there's no need anymore to pause the affected
vCPUs.

Finally the global TLB flush performed by flush_tlb_mask is also not
necessary: since we only want to flush the guest TLB state, it's enough
to trigger a vmexit on the pCPUs currently holding any vCPU state, as
such a vmexit will already perform an ASID/VPID update and thus clear
the guest TLB.

Signed-off-by: Roger Pau Monné 
Reviewed-by: Wei Liu 
---
Changes since v3:
 - s/do_flush/handle_flush/.
 - Add comment about handle_flush usage.
 - Fix VPID typo in comment.
---
 xen/arch/x86/mm/hap/hap.c | 52 +++
 1 file changed, 25 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 6894c1aa38..dbb61bf9c6 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -669,32 +669,28 @@ static void hap_update_cr3(struct vcpu *v, int 
do_locking, bool noflush)
 hvm_update_guest_cr3(v, noflush);
 }
 
+/*
+ * NB: doesn't actually perform any flush, used just to clear the CPU from the
+ * mask and hence signal that the guest TLB flush has been done.
+ */
+static void handle_flush(void *data)
+{
+cpumask_t *mask = data;
+unsigned int cpu = smp_processor_id();
+
+ASSERT(cpumask_test_cpu(cpu, mask));
+cpumask_clear_cpu(cpu, mask);
+}
+
 bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
void *ctxt)
 {
 static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
 cpumask_t *mask = &this_cpu(flush_cpumask);
 struct domain *d = current->domain;
+unsigned int this_cpu = smp_processor_id();
 struct vcpu *v;
 
-/* Avoid deadlock if more than one vcpu tries this at the same time. */
-if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-return false;
-
-/* Pause all other vcpus. */
-for_each_vcpu ( d, v )
-if ( v != current && flush_vcpu(ctxt, v) )
-vcpu_pause_nosync(v);
-
-/* Now that all VCPUs are signalled to deschedule, we wait... */
-for_each_vcpu ( d, v )
-if ( v != current && flush_vcpu(ctxt, v) )
-while ( !vcpu_runnable(v) && v->is_running )
-cpu_relax();
-
-/* All other vcpus are paused, safe to unlock now. */
-spin_unlock(&d->hypercall_deadlock_mutex);
-
 cpumask_clear(mask);
 
 /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
@@ -705,20 +701,22 @@ bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct 
vcpu *v),
 if ( !flush_vcpu(ctxt, v) )
 continue;
 
-paging_update_cr3(v, false);
+hvm_asid_flush_vcpu(v);
 
 cpu = read_atomic(&v->dirty_cpu);
-if ( is_vcpu_dirty_cpu(cpu) )
+if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
 __cpumask_set_cpu(cpu, mask);
 }
 
-/* Flush TLBs on all CPUs with dirty vcpu state. */
-flush_tlb_mask(mask);
-
-/* Done. */
-for_each_vcpu ( d, v )
-if ( v != current && flush_vcpu(ctxt, v) )
-vcpu_unpause(v);
+/*
+ * Trigger a vmexit on all pCPUs with dirty vCPU state in order to force an
+ * ASID/VPID change and hence accomplish a guest TLB flush. Note that vCPUs
+ * not currently running will already be flushed when scheduled because of
+ * the ASID tickle done in the loop above.
+ */
+on_selected_cpus(mask, handle_flush, mask, 0);
+while ( !cpumask_empty(mask) )
+cpu_relax();
 
 return true;
 }
-- 
2.25.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 0/7] x86: improve assisted tlb flush and use it in guest mode

2020-02-10 Thread Roger Pau Monne
Hello,

The following series aims to improve the TLB flush times when running
nested Xen, and it's especially beneficial when running in shim mode.

Only the HAP guest TLB flush is improved, the shadow paging TLB flush is
left as-is, and can be improved later if there's interest.

For a reference on the performance improvement see patch #7, as it's a
huge increase which can benefit other guests using assisted TLB flushes,
and also the ones using the viridian TLB flush assist (ie: Windows).

Thanks, Roger.

Roger Pau Monne (7):
  x86/hvm: allow ASID flush when v != current
  x86/paging: add TLB flush hooks
  x86/hap: improve hypervisor assisted guest TLB flush
  x86/tlb: introduce a flush guests TLB flag
  x86/tlb: allow disabling the TLB clock
  xen/guest: prepare hypervisor ops to use alternative calls
  x86/tlb: use Xen L0 assisted TLB flush when available

 xen/arch/x86/flushtlb.c| 24 ++---
 xen/arch/x86/guest/hyperv/hyperv.c |  2 +-
 xen/arch/x86/guest/hypervisor.c| 51 ++
 xen/arch/x86/guest/xen/xen.c   |  8 ++-
 xen/arch/x86/hvm/asid.c|  6 +--
 xen/arch/x86/hvm/hvm.c | 51 ++
 xen/arch/x86/mm/hap/hap.c  | 52 +++
 xen/arch/x86/mm/shadow/common.c| 71 +++---
 xen/arch/x86/mm/shadow/hvm.c   |  2 +-
 xen/arch/x86/mm/shadow/multi.c | 16 +++---
 xen/arch/x86/smp.c | 11 
 xen/include/asm-x86/flushtlb.h | 19 ++-
 xen/include/asm-x86/guest/hypervisor.h | 17 ++
 xen/include/asm-x86/hap.h  |  3 ++
 xen/include/asm-x86/shadow.h   | 12 +
 15 files changed, 246 insertions(+), 99 deletions(-)

-- 
2.25.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 5/7] x86/tlb: allow disabling the TLB clock

2020-02-10 Thread Roger Pau Monne
The TLB clock is helpful when running Xen on bare metal because when
doing a TLB flush each CPU is IPI'ed and can keep a timestamp of the
last flush.

This is not the case however when Xen is running virtualized, and the
underlying hypervisor provides mechanisms to assist in performing TLB
flushes: Xen itself for example offers a HVMOP_flush_tlbs hypercall in
order to perform a TLB flush without having to IPI each CPU. When
using such mechanisms it's no longer possible to keep a timestamp of
the flushes on each CPU, as they are performed by the underlying
hypervisor.

Offer a boolean in order to signal Xen that the timestamped TLB
shouldn't be used. This avoids keeping the timestamps of the flushes,
and also forces NEED_FLUSH to always return true.

No functional change intended, as this change doesn't introduce any
user that disables the timestamped TLB.

Signed-off-by: Roger Pau Monné 
---
 xen/arch/x86/flushtlb.c| 19 +--
 xen/include/asm-x86/flushtlb.h | 17 -
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index e7ccd4ec7b..3649900793 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -32,6 +32,9 @@
 u32 tlbflush_clock = 1U;
 DEFINE_PER_CPU(u32, tlbflush_time);
 
+/* Signals whether the TLB flush clock is in use. */
+bool __read_mostly tlb_clk_enabled = true;
+
 /*
  * pre_flush(): Increment the virtual TLB-flush clock. Returns new clock value.
  * 
@@ -82,12 +85,13 @@ static void post_flush(u32 t)
 static void do_tlb_flush(void)
 {
 unsigned long flags, cr4;
-u32 t;
+u32 t = 0;
 
 /* This non-reentrant function is sometimes called in interrupt context. */
 local_irq_save(flags);
 
-t = pre_flush();
+if ( tlb_clk_enabled )
+t = pre_flush();
 
 if ( use_invpcid )
 invpcid_flush_all();
@@ -99,7 +103,8 @@ static void do_tlb_flush(void)
 else
 write_cr3(read_cr3());
 
-post_flush(t);
+if ( tlb_clk_enabled )
+post_flush(t);
 
 local_irq_restore(flags);
 }
@@ -107,7 +112,7 @@ static void do_tlb_flush(void)
 void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
 {
 unsigned long flags, old_cr4;
-u32 t;
+u32 t = 0;
 
 /* Throughout this function we make this assumption: */
 ASSERT(!(cr4 & X86_CR4_PCIDE) || !(cr4 & X86_CR4_PGE));
@@ -115,7 +120,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
 /* This non-reentrant function is sometimes called in interrupt context. */
 local_irq_save(flags);
 
-t = pre_flush();
+if ( tlb_clk_enabled )
+t = pre_flush();
 
 old_cr4 = read_cr4();
 ASSERT(!(old_cr4 & X86_CR4_PCIDE) || !(old_cr4 & X86_CR4_PGE));
@@ -167,7 +173,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
 if ( cr4 & X86_CR4_PCIDE )
 invpcid_flush_all_nonglobals();
 
-post_flush(t);
+if ( tlb_clk_enabled )
+post_flush(t);
 
 local_irq_restore(flags);
 }
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 07f9bc6103..9773014320 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -21,10 +21,21 @@ extern u32 tlbflush_clock;
 /* Time at which each CPU's TLB was last flushed. */
 DECLARE_PER_CPU(u32, tlbflush_time);
 
-#define tlbflush_current_time() tlbflush_clock
+/* TLB clock is in use. */
+extern bool tlb_clk_enabled;
+
+static inline uint32_t tlbflush_current_time(void)
+{
+/* Returning 0 from tlbflush_current_time will always force a flush. */
+return tlb_clk_enabled ? tlbflush_clock : 0;
+}
 
 static inline void page_set_tlbflush_timestamp(struct page_info *page)
 {
+/* Avoid the write if the TLB clock is disabled. */
+if ( !tlb_clk_enabled )
+return;
+
 /*
  * Prevent storing a stale time stamp, which could happen if an update
  * to tlbflush_clock plus a subsequent flush IPI happen between the
@@ -67,6 +78,10 @@ static inline void tlbflush_filter(cpumask_t *mask, uint32_t 
page_timestamp)
 {
 unsigned int cpu;
 
+/* Short-circuit: there's no need to iterate if the clock is disabled. */
+if ( !tlb_clk_enabled )
+return;
+
 for_each_cpu ( cpu, mask )
 if ( !NEED_FLUSH(per_cpu(tlbflush_time, cpu), page_timestamp) )
 __cpumask_clear_cpu(cpu, mask);
-- 
2.25.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] x86/svm: Reduce vmentry latency

2020-02-10 Thread Jan Beulich
On 10.02.2020 13:09, Roger Pau Monné wrote:
> On Mon, Feb 10, 2020 at 11:42:06AM +, Andrew Cooper wrote:
>> Writing to the stack pointer in the middle of a line of pop operations is
>> specifically recommended against by the optimisation guide, and is a 
>> technique
>> used by Speculative Load Hardening to combat SpectreRSB.
>>
>> In practice, it causes all further stack-relative accesses to block until the
>> write to the stack pointer retires, so the stack engine can get back in sync.
>>
>> Pop into any dead register to discard %rax's value without clobbering the
>> stack engine.  Smaller compiled code, and runs faster.
>>
>> Signed-off-by: Andrew Cooper 
> 
> Reviewed-by: Roger Pau Monné 

Acked-by: Jan Beulich 


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [OSSTEST PATCH] ts-xen-build-prep: Install python3-dev

2020-02-10 Thread Ian Jackson
Anthony PERARD writes ("[OSSTEST PATCH] ts-xen-build-prep: Install 
python3-dev"):
> Allow to build Xen with python3.
> 
> Also, QEMU upstream (to be 4.3) now requires python >= 3.5, but that
> affect only xen-unstable.
> 
> Signed-off-by: Anthony PERARD 

Acked-by: Ian Jackson 

And pushed to pretest.

Thanks,
Ian.

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [linux-5.4 test] 146823: trouble: blocked/broken

2020-02-10 Thread osstest service owner
flight 146823 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146823/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-pvopsbroken
 build-armhf  broken
 build-arm64  broken
 build-arm64-xsm  broken
 build-i386-pvops broken
 build-armhf-pvopsbroken
 build-amd64-pvopsbroken
 build-i386   broken
 build-amd64  broken
 build-i386-xsm   broken
 build-amd64-xsm  broken
 build-i386-pvops  4 host-install(4)broken REGR. vs. 146121
 build-arm64   4 host-install(4)broken REGR. vs. 146121
 build-arm64-pvops 4 host-install(4)broken REGR. vs. 146121
 build-arm64-xsm   4 host-install(4)broken REGR. vs. 146121
 build-amd64-xsm   4 host-install(4)broken REGR. vs. 146121
 build-armhf-pvops 4 host-install(4)broken REGR. vs. 146121
 build-amd64-pvops 4 host-install(4)broken REGR. vs. 146121
 build-i386-xsm4 host-install(4)broken REGR. vs. 146121
 build-i3864 host-install(4)broken REGR. vs. 146121
 build-amd64   4 host-install(4)broken REGR. vs. 146121
 build-armhf   4 host-install(4)broken REGR. vs. 146121

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-examine  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked 
n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1) blocked n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1) blocked n/a

Re: [Xen-devel] [PATCH] x86/p2m: drop p2m_access_t parameter from set_mmio_p2m_entry()

2020-02-10 Thread Jan Beulich
On 07.02.2020 18:21, Tamas K Lengyel wrote:
> On Fri, Feb 7, 2020 at 10:16 AM Tamas K Lengyel  wrote:
>>
>> On Fri, Feb 7, 2020 at 9:54 AM Jan Beulich  wrote:
>>>
>>> On 07.02.2020 10:52, Roger Pau Monné wrote:
 On Fri, Feb 07, 2020 at 09:08:15AM +0100, Jan Beulich wrote:
> On 06.02.2020 16:20, Jan Beulich wrote:
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -3037,9 +3037,8 @@ static int vmx_alloc_vlapic_mapping(stru
>>  share_xen_page_with_guest(pg, d, SHARE_rw);
>>  d->arch.hvm.vmx.apic_access_mfn = mfn;
>>
>> -return set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), 
>> mfn,
>> -  PAGE_ORDER_4K,
>> -  p2m_get_hostp2m(d)->default_access);
>> +return set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE), 
>> mfn,
>> +  PAGE_ORDER_4K);
>>  }
>
> Upon 2nd thought - does this really want to use default access?
> Execute permission for this page looks a little suspicious.
> Isn't it the case that this page doesn't (normally?) get
> accessed at all, and instead its address serves as an indicator
> to the CPU? (I even vaguely recall it having been considered to
> consolidate this, to e.g. a single page per domain.) In which
> case even p2m_access_n might be good enough?

 Hm, I think p2m_access_n is not enough, as that would trigger an EPT
 fault which has preference over the APIC access VM exit (see 29.4.1
 Priority of APIC-Access VM Exits).
>>>
>>> Ah, yes, reading that text I agree. Having just a single such page
>>> per domain would still seem possible, though. Although, if we meant
>>> to support a guest moving the APIC base address, then we couldn't,
>>> again.
>>>
 I think setting it to p2m_access_rw should be enough, and we would get
 EPT faults when trying to execute from APIC page.
>>>
>>> Then the other question is whether there's any use for introspection
>>> to further limit permissions on this (kind of fake) page. Tamas?
>>
>> I'm not aware of any use-case that would restrict the EPT permission
>> for MMIO pages. That said, an application could still request that to
>> be set later on. Since this function here gets called in
>> vmx_domain_initialise I suspect a mem_access user didn't even have a
>> chance to change the default_access to anything custom so I venture it
>> would be safe to choose whatever permission is sensible. If anyone
>> wants to mess with the permission later they can do that regardless of
>> what was set here.
> 
> One thing to add though regarding using p2m_access_rw here is that in
> case something would trigger an X violation it would lead to an event
> being sent to a vm_event subscriber, which they may not be able to
> make sense of.

Hmm, good point.

> So I would suggest that if you want to make this
> pagetable entry R/W only to use a p2m_type for that instead of a
> p2m_access.

This would then take a further type derived from p2m_mmio_direct,
with its driving moved to the hypercall interface (as Xen can't
tell which one is which, e.g. ROM vs "ordinary" MMIO, _except_ in
this special case here). So I guess the patch as is looks to be
the better alternative overall.

Jan


[Xen-devel] [PATCH] xen/sched: remove pointless ASSERT() in credit2

2020-02-10 Thread Juergen Gross
The ASSERT() at the top of csched2_context_saved() is completely
pointless, as the BUG_ON() just in front of it catches the same problem
already.

While at it, remove a bogus space in the BUG_ON().

Signed-off-by: Juergen Gross 
---
 xen/common/sched/credit2.c | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index b965cd1c7b..78467b772c 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -2167,10 +2167,8 @@ csched2_context_saved(const struct scheduler *ops, 
struct sched_unit *unit)
 s_time_t now = NOW();
 LIST_HEAD(were_parked);
 
-BUG_ON( !is_idle_unit(unit) &&
-svc->rqd != c2rqd(ops, sched_unit_master(unit)));
-ASSERT(is_idle_unit(unit) ||
-   svc->rqd == c2rqd(ops, sched_unit_master(unit)));
+BUG_ON(!is_idle_unit(unit) &&
+   svc->rqd != c2rqd(ops, sched_unit_master(unit)));
 
 /* This unit is now eligible to be put on the runqueue again */
 __clear_bit(__CSFLAG_scheduled, >flags);
-- 
2.16.4



Re: [Xen-devel] [PATCH 4/4] AMD/IOMMU: Treat head/tail pointers as byte offsets

2020-02-10 Thread Jan Beulich
On 10.02.2020 16:19, Andrew Cooper wrote:
> On 10/02/2020 15:02, Jan Beulich wrote:
>> On 10.02.2020 15:55, Andrew Cooper wrote:
>>> On 05/02/2020 10:36, Jan Beulich wrote:
 On 03.02.2020 15:43, Andrew Cooper wrote:
> --- a/xen/drivers/passthrough/amd/iommu_cmd.c
> +++ b/xen/drivers/passthrough/amd/iommu_cmd.c
> @@ -24,16 +24,14 @@ static int queue_iommu_command(struct amd_iommu 
> *iommu, u32 cmd[])
>  {
>  uint32_t tail, head;
>  
> -tail = iommu->cmd_buffer.tail;
> -if ( ++tail == iommu->cmd_buffer.entries )
> +tail = iommu->cmd_buffer.tail + IOMMU_CMD_BUFFER_ENTRY_SIZE;
> +if ( tail == iommu->cmd_buffer.size )
>  tail = 0;
>  
> -head = iommu_get_rb_pointer(readl(iommu->mmio_base +
> -  IOMMU_CMD_BUFFER_HEAD_OFFSET));
> +head = readl(iommu->mmio_base + IOMMU_CMD_BUFFER_HEAD_OFFSET);
>  if ( head != tail )
 Surely you want to mask off reserved (or more generally
 unrelated) bits, before consuming the value for the purpose
 here (and elsewhere below)?
>>> Reserved bits are defined in the IOMMU spec to be read-only zero.
>>>
>>> It is also undefined behaviour for this value to ever be outside of the
>>> size configured for command buffer, so using the value like this is spec
>>> compliant.
>>>
>>> As for actually masking the values, that breaks the optimisers ability
>>> to construct commands in the command ring.  This aspect can be worked
>>> around with other code changes, but I also think it is implausible that
>>> the remaining reserved bits here are going to sprout incompatible future
>>> uses.
>> Implausible - perhaps. But impossible - no. There could be a simple
>> flag bit appearing somewhere in these registers. I simply don't think it
>> is a good idea to set a precedent of not honoring reserved bit as
>> being reserved. The spec saying "read-only zero" can only be viewed
>> as correct for the current version of the spec,
> 
> Its perfectly easy to do forward compatible changes with a spec written
> in this way.
> 
> It means that new behaviours have to be opted in to, and this is how the
> AMD IOMMU spec has evolved.  Notice how every new feature declares "this
> bit is reserved unless $OTHER_THING is enabled."
> 
> It is also a very sane way of doing forward compatibility, from
> software's point of view.

Yes. But does the IOMMU spec spell out that it'll follow this
in the future?

>> or else why would
>> the bits be called "reserved" rather than e.g. "read-as-zero"?
> 
> Read Table 1, but it also ought to be obvious.  "Reserved", "Resv" and
> "Res" are all shorter to write than "read-as-zero", especially in the
> diagrams of a few individual bits in a register.

There's also the common acronym "raz", which is as short. That table
in particular says nothing about future uses of currently reserved
bits. Just take the Extended Feature Register as a reference: How
would new features be advertised (in currently reserved bits) if use
of those bits was to be qualified by yet some other feature bit(s).
Past growth of the set of used bits also hasn't followed a pattern
you seem to suggest.

Don't get me wrong - I agree it's unlikely for these bits to gain
a meaning that would conflict with a more relaxed use like you do
suggest. But I don't think better code generation should be an
argument against having code written as compatibly as possible.

Jan
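
A hedged sketch of the more defensive read argued for above might look like
the fragment below.  The helper name is hypothetical and it is assumed that
cmd_buffer.size is a power of two; neither detail is taken from the patch
under review.

/* Hypothetical helper: mask the MMIO ring pointer down to the configured
 * buffer size, so any future use of the currently-reserved bits cannot
 * reach the offset arithmetic. */
static uint32_t read_cmd_head_masked(const struct amd_iommu *iommu)
{
    uint32_t head = readl(iommu->mmio_base + IOMMU_CMD_BUFFER_HEAD_OFFSET);

    return head & (iommu->cmd_buffer.size - 1);
}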


[Xen-devel] [PATCH] tools/configure: generate stubs and long-double 32-bit headers if needed

2020-02-10 Thread Ian Jackson
Christopher Clark writes ("[PATCH] tools/configure: generate stubs and 
long-double 32-bit headers if needed"):
> The gnu/stubs-32.h and bits/long-double-32.h headers are required to
> build hvmloader but are not always available in 64-bit build
> environments. To avoid introducing a build requirement on 32-bit
> multilib generate versions of them from the 64-bit equivalent header.
> 
> This patch enables the removal of downstream patching that has been
> carried in the Yocto/OpenEmbedded meta-virtualization layer since 2012.

Thanks for this patch.  However, I'm quite doubtful about the
approach.

> +echo '#include ' >conf-stubs.c
> +SIXTY_FOUR_HDR=`$CC -M conf-stubs.c | grep '/stubs-64.h$'`
> +rm conf-stubs.c
> +mkdir -p include/gnu
> +cat "${SIXTY_FOUR_HDR}" | \
> +grep -v 'stub_bdflush\|stub_getmsg\|stub_putmsg' 
> >include/gnu/stubs-32.h
> +echo \#define __stub___kernel_cosl >> include/gnu/stubs-32.h
> +echo \#define __stub___kernel_sinl >> include/gnu/stubs-32.h
> +echo \#define __stub___kernel_tanl >> include/gnu/stubs-32.h

This code seems to be ad-hoc seddery which depends on the details of
the existing header file.  That seems like it is unprincipled and
fragile, to me.

I don't understand why you wouldn't just make a small package or
tarball or something containing the relevant headers and install
that ?

Also, don't you need a 32-bit libgcc too ?

Thanks,
Ian.


Re: [Xen-devel] [PATCH 1/2] pygrub: fix python3 cross-compile: install with INSTALL_PYTHON_PROG

2020-02-10 Thread Ian Jackson
Christopher Clark writes ("[PATCH 1/2] pygrub: fix python3 cross-compile: 
install with INSTALL_PYTHON_PROG"):
> Install pygrub with INSTALL_PYTHON_PROG, as per the other Xen python
> executables, to ensure that the hashbang path to the interpreter
> is written correctly in cross-compile builds, eg. with OpenEmbedded.

Hrm.  There is definitely a bug here and I think
tools/python/install-wrap needs to be called.

What I don't understand is...

> diff --git a/tools/pygrub/Makefile b/tools/pygrub/Makefile
> index 3063c4998f..b4f6f10ddd 100644
> --- a/tools/pygrub/Makefile
> +++ b/tools/pygrub/Makefile
> @@ -18,6 +18,8 @@ install: all
>   CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) \
>   setup.py install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
>--root="$(DESTDIR)" --install-scripts=$(LIBEXEC_BIN) --force
> + rm -f $(DESTDIR)/$(LIBEXEC_BIN)/pygrub
> + $(INSTALL_PYTHON_PROG) src/pygrub $(DESTDIR)/$(LIBEXEC_BIN)/pygrub
>   set -e; if [ $(bindir) != $(LIBEXEC_BIN) -a \
>"`readlink -f $(DESTDIR)/$(bindir)`" != \
>"`readlink -f $(LIBEXEC_BIN)`" ]; then \

... why this is the right approach in tools/pygrub when it is *not*
the approach used in tools/python, where install-wrap lives, and which
is the other directory which has a setup.py.

tools/python seems to use $(INSTALL_PROG) and not have anything in
`scripts' in setup.py.  Is that wrong, too ?

Perhaps instead of the rm, pygrub/setup.py should lose the line
  scripts = ["src/pygrub"],
?

Or is there maybe a way to get setup.py to use a different `install' ?

Thanks,
Ian.


Re: [Xen-devel] [PATCH 2/2] python, pygrub: pass DISTUTILS env vars as setup.py args

2020-02-10 Thread Ian Jackson
Christopher Clark writes ("[PATCH 2/2] python, pygrub: pass DISTUTILS env vars 
as setup.py args"):
> Allow the target install dir (PYTHON_SITEPACKAGES_DIR) to be respected,
> as well as other parameters set by the OpenEmbedded build system.
> This is especially useful when the target libdir is not the default one
> (/usr/lib), but for example /usr/lib64.
> 
> Signed-off-by: Maciej Pijanowski 
> 
> This enables the distro build system to pass additional args to the
> python setup.py build and install commands.
> Signed-off-by: Christopher Clark 

Thanks.  The overall idea here is very sound and I would like to take
this patch in some form.  But, and I hope this is not too annoying, I
have a couple of observations/questions:

Firstly, the commit message mentions PYTHON_SITEPACKAGES_DIR which is
a thing which does not appears in this commit.  AIUI the OpenEmbedded
build system honours that and implements it by setting
DISTUTILS_BUILD_ARGS and DISTUTILS_INSTALL_ARGS.  I think this needs
to be explained correctly in the commit message.

Secondly, it bothers me that the env var name does not mention python
at all.  Would it be OK to change it to PYDISTUTILS_... or something ?

Thanks,
Ian.


Re: [Xen-devel] [PATCH] MAINTAINERS: cc community manager on patches to CHANGELOG.md

2020-02-10 Thread Ian Jackson
Julien Grall writes ("Re: [PATCH] MAINTAINERS: cc community manager on patches 
to CHANGELOG.md"):
> Hi,
> 
> On 06/02/2020 16:52, Wei Liu wrote:
> > On Thu, Feb 06, 2020 at 04:48:10PM +, Paul Durrant wrote:
> >> The purpose of the change-log is to be a human-readable list of notable
> >> changes and, as such, additions to it are likely of interest to the
> >> community manager in preparing blog entries, release summaries, etc.
> >>
> >> Signed-off-by: Paul Durrant 
> > 
> > Acked-by: Wei Liu 
> 
> Acked-by: Julien Grall 
> 
> I think we may need an ack from whoever receive community.manager@. 
> George, Ian?

Acked-by: Ian Jackson 


[Xen-devel] [PATCH] xen/sched: remove sched_init_pdata()

2020-02-10 Thread Juergen Gross
sched_init_pdata() is used nowhere, it can be removed. Same applies to
the .init_pdata hook of the per-scheduler interface.

Signed-off-by: Juergen Gross 
---
 xen/common/sched/credit.c  | 12 
 xen/common/sched/credit2.c | 21 -
 xen/common/sched/null.c| 10 --
 xen/common/sched/private.h |  8 
 xen/common/sched/rt.c  | 31 ---
 5 files changed, 82 deletions(-)

diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index 05946eea6e..93d89da278 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -614,17 +614,6 @@ init_pdata(struct csched_private *prv, struct csched_pcpu 
*spc, int cpu)
 spc->nr_runnable = 0;
 }
 
-static void
-csched_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
-{
-unsigned long flags;
-struct csched_private *prv = CSCHED_PRIV(ops);
-
-spin_lock_irqsave(>lock, flags);
-init_pdata(prv, pdata, cpu);
-spin_unlock_irqrestore(>lock, flags);
-}
-
 /* Change the scheduler of cpu to us (Credit). */
 static spinlock_t *
 csched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
@@ -2273,7 +2262,6 @@ static const struct scheduler sched_credit_def = {
 .alloc_udata= csched_alloc_udata,
 .free_udata = csched_free_udata,
 .alloc_pdata= csched_alloc_pdata,
-.init_pdata = csched_init_pdata,
 .deinit_pdata   = csched_deinit_pdata,
 .free_pdata = csched_free_pdata,
 .switch_sched   = csched_switch_sched,
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 231f87d960..b965cd1c7b 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -3818,26 +3818,6 @@ init_pdata(struct csched2_private *prv, struct 
csched2_pcpu *spc,
 return spc->runq_id;
 }
 
-static void
-csched2_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
-{
-struct csched2_private *prv = csched2_priv(ops);
-spinlock_t *old_lock;
-unsigned long flags;
-unsigned rqi;
-
-write_lock_irqsave(>lock, flags);
-old_lock = pcpu_schedule_lock(cpu);
-
-rqi = init_pdata(prv, pdata, cpu);
-/* Move the scheduler lock to the new runq lock. */
-get_sched_res(cpu)->schedule_lock = >rqd[rqi].lock;
-
-/* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
-spin_unlock(old_lock);
-write_unlock_irqrestore(>lock, flags);
-}
-
 /* Change the scheduler of cpu to us (Credit2). */
 static spinlock_t *
 csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
@@ -4085,7 +4065,6 @@ static const struct scheduler sched_credit2_def = {
 .alloc_udata= csched2_alloc_udata,
 .free_udata = csched2_free_udata,
 .alloc_pdata= csched2_alloc_pdata,
-.init_pdata = csched2_init_pdata,
 .deinit_pdata   = csched2_deinit_pdata,
 .free_pdata = csched2_free_pdata,
 .switch_sched   = csched2_switch_sched,
diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index 8c3101649d..82d5d1baab 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -166,15 +166,6 @@ static void init_pdata(struct null_private *prv, struct 
null_pcpu *npc,
 npc->unit = NULL;
 }
 
-static void null_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
-{
-struct null_private *prv = null_priv(ops);
-
-ASSERT(pdata);
-
-init_pdata(prv, pdata, cpu);
-}
-
 static void null_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
 struct null_private *prv = null_priv(ops);
@@ -1042,7 +1033,6 @@ static const struct scheduler sched_null_def = {
 .deinit = null_deinit,
 .alloc_pdata= null_alloc_pdata,
 .free_pdata = null_free_pdata,
-.init_pdata = null_init_pdata,
 .switch_sched   = null_switch_sched,
 .deinit_pdata   = null_deinit_pdata,
 
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 2a94179baa..367811a12f 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -306,7 +306,6 @@ struct scheduler {
 struct sched_unit *, void *);
 void (*free_pdata) (const struct scheduler *, void *, int);
 void *   (*alloc_pdata)(const struct scheduler *, int);
-void (*init_pdata) (const struct scheduler *, void *, int);
 void (*deinit_pdata)   (const struct scheduler *, void *, int);
 
 /* Returns ERR_PTR(-err) for error, NULL for 'nothing needed'. */
@@ -408,13 +407,6 @@ static inline void sched_free_pdata(const struct scheduler 
*s, void *data,
 s->free_pdata(s, data, cpu);
 }
 
-static inline void sched_init_pdata(const struct scheduler *s, void *data,
-int cpu)
-{
-if ( s->init_pdata )
-s->init_pdata(s, data, cpu);
-}
-
 static inline void sched_deinit_pdata(const struct scheduler *s, void *data,
   int cpu)
 {
diff --git 

Re: [Xen-devel] [PATCH 4/4] AMD/IOMMU: Treat head/tail pointers as byte offsets

2020-02-10 Thread Andrew Cooper
On 10/02/2020 15:02, Jan Beulich wrote:
> On 10.02.2020 15:55, Andrew Cooper wrote:
>> On 05/02/2020 10:36, Jan Beulich wrote:
>>> On 03.02.2020 15:43, Andrew Cooper wrote:
 --- a/xen/drivers/passthrough/amd/iommu_cmd.c
 +++ b/xen/drivers/passthrough/amd/iommu_cmd.c
 @@ -24,16 +24,14 @@ static int queue_iommu_command(struct amd_iommu 
 *iommu, u32 cmd[])
  {
  uint32_t tail, head;
  
 -tail = iommu->cmd_buffer.tail;
 -if ( ++tail == iommu->cmd_buffer.entries )
 +tail = iommu->cmd_buffer.tail + IOMMU_CMD_BUFFER_ENTRY_SIZE;
 +if ( tail == iommu->cmd_buffer.size )
  tail = 0;
  
 -head = iommu_get_rb_pointer(readl(iommu->mmio_base +
 -  IOMMU_CMD_BUFFER_HEAD_OFFSET));
 +head = readl(iommu->mmio_base + IOMMU_CMD_BUFFER_HEAD_OFFSET);
  if ( head != tail )
>>> Surely you want to mask off reserved (or more generally
>>> unrelated) bits, before consuming the value for the purpose
>>> here (and elsewhere below)?
>> Reserved bits are defined in the IOMMU spec to be read-only zero.
>>
>> It is also undefined behaviour for this value to ever be outside of the
>> size configured for command buffer, so using the value like this is spec
>> compliant.
>>
>> As for actually masking the values, that breaks the optimisers ability
>> to construct commands in the command ring.  This aspect can be worked
>> around with other code changes, but I also think it is implausible that
>> the remaining reserved bits here are going to sprout incompatible future
>> uses.
> Implausible - perhaps. But impossible - no. There could be a simple
> flag bit appearing somewhere in these registers. I simply don't think it
> is a good idea to set a precedent of not honoring reserved bit as
> being reserved. The spec saying "read-only zero" can only be viewed
> as correct for the current version of the spec,

Its perfectly easy to do forward compatible changes with a spec written
in this way.

It means that new behaviours have to be opted in to, and this is how the
AMD IOMMU spec has evolved.  Notice how every new feature declares "this
bit is reserved unless $OTHER_THING is enabled."

It is also a very sane way of doing forward compatibility, from
software's point of view.

> or else why would
> the bits be called "reserved" rather than e.g. "read-as-zero"?

Read Table 1, but it also ought to be obvious.  "Reserved", "Resv" and
"Res" are all shorter to write than "read-as-zero", especially in the
diagrams of a few individual bits in a register.

~Andrew


[Xen-devel] [xen-unstable-smoke test] 146828: trouble: blocked/broken

2020-02-10 Thread osstest service owner
flight 146828 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146828/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 build-arm64-xsm  broken
 build-armhf  broken
 build-amd64   4 host-install(4)broken REGR. vs. 146767
 build-arm64-xsm   4 host-install(4)broken REGR. vs. 146767
 build-armhf   4 host-install(4)broken REGR. vs. 146767

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a

version targeted for testing:
 xen  0a9c44486b901dbcef7c5e583d1a5ffbf4762bc5
baseline version:
 xen  72dbcf0c065037dddb591a072c4f8f16fe888ea8

Last test of basis   146767  2020-02-06 17:01:03 Z3 days
Failing since146806  2020-02-08 13:00:53 Z2 days9 attempts
Testing same since   146828  2020-02-10 11:00:49 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Jan Beulich 
  Jeff Kubascik 
  Julien Grall 
  Julien Grall 
  Marek Marczykowski-Górecki 
  Wei Liu 

jobs:
 build-arm64-xsm  broken  
 build-amd64  broken  
 build-armhf  broken  
 build-amd64-libvirt  blocked 
 test-armhf-armhf-xl  blocked 
 test-arm64-arm64-xl-xsm  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64blocked 
 test-amd64-amd64-libvirt blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-step build-amd64 host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-armhf host-install(4)

Not pushing.


commit 0a9c44486b901dbcef7c5e583d1a5ffbf4762bc5
Author: Andrew Cooper 
Date:   Wed Feb 5 16:50:53 2020 +

tools/python: Drop cpuid helpers

These are believed-unused, and the underlying infrastructure is about to be
rewritten completely.

Signed-off-by: Andrew Cooper 
Acked-by: Marek Marczykowski-Górecki 

commit 69da7d5440c609c57c5bba9a73b91c62ba2852e6
Author: Jeff Kubascik 
Date:   Tue Feb 4 14:51:50 2020 -0500

xen/arm: Handle unimplemented VGICv3 registers as RAZ/WI

Per the ARM Generic Interrupt Controller Architecture Specification (ARM
IHI 0069E), reserved registers should generally be treated as RAZ/WI.
To simplify the VGICv3 design and improve guest compatibility, treat the
default case for GICD and GICR registers as read_as_zero/write_ignore.

Signed-off-by: Jeff Kubascik 
Acked-by: Julien Grall 

commit cbd1a54f6dea3f4a7feed51e189ebae50ac9dd15
Author: Julien Grall 
Date:   Tue Feb 4 16:53:49 2020 +

xen/include: Fix typoes in asm-x86/domain.h

Signed-off-by: Julien Grall 
Acked-by: Andrew Cooper 

commit 24ea7abcdc8654ad2d9831a79c6d6f580aca6a3c
Author: Julien Grall 
Date:   Thu Feb 6 15:41:18 2020 +

xen/include: public: Document the padding in struct xen_hvm_param

There is an implicit padding of 2 bytes in struct xen_hvm_param between
the fields domid and index. Make it explicit by introducing a padding
field. This can also serve as documentation.

Note that I don't think we can mandate it to be zero because a guest may
not have initialized the padding.

Signed-off-by: Julien Grall 
Acked-by: Jan Beulich 
Acked-by: Wei Liu 
(qemu changes not included)
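
A hedged sketch of the layout described in that commit message (field names
follow the public header; the explicit pad field is the addition being
discussed):

struct xen_hvm_param {
    domid_t  domid;   /* IN: 16 bits */
    uint16_t pad;     /* IN: explicit padding, not guaranteed to be zero */
    uint32_t index;   /* IN */
    uint64_t value;   /* IN/OUT */
};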


Re: [Xen-devel] [PATCH 4/4] AMD/IOMMU: Treat head/tail pointers as byte offsets

2020-02-10 Thread Jan Beulich
On 10.02.2020 15:55, Andrew Cooper wrote:
> On 05/02/2020 10:36, Jan Beulich wrote:
>> On 03.02.2020 15:43, Andrew Cooper wrote:
>>> --- a/xen/drivers/passthrough/amd/iommu_cmd.c
>>> +++ b/xen/drivers/passthrough/amd/iommu_cmd.c
>>> @@ -24,16 +24,14 @@ static int queue_iommu_command(struct amd_iommu *iommu, 
>>> u32 cmd[])
>>>  {
>>>  uint32_t tail, head;
>>>  
>>> -tail = iommu->cmd_buffer.tail;
>>> -if ( ++tail == iommu->cmd_buffer.entries )
>>> +tail = iommu->cmd_buffer.tail + IOMMU_CMD_BUFFER_ENTRY_SIZE;
>>> +if ( tail == iommu->cmd_buffer.size )
>>>  tail = 0;
>>>  
>>> -head = iommu_get_rb_pointer(readl(iommu->mmio_base +
>>> -  IOMMU_CMD_BUFFER_HEAD_OFFSET));
>>> +head = readl(iommu->mmio_base + IOMMU_CMD_BUFFER_HEAD_OFFSET);
>>>  if ( head != tail )
>> Surely you want to mask off reserved (or more generally
>> unrelated) bits, before consuming the value for the purpose
>> here (and elsewhere below)?
> 
> Reserved bits are defined in the IOMMU spec to be read-only zero.
> 
> It is also undefined behaviour for this value to ever be outside of the
> size configured for command buffer, so using the value like this is spec
> compliant.
> 
> As for actually masking the values, that breaks the optimisers ability
> to construct commands in the command ring.  This aspect can be worked
> around with other code changes, but I also think it is implausible that
> the remaining reserved bits here are going to sprout incompatible future
> uses.

Implausible - perhaps. But impossible - no. There could be a simple
flag bit appearing somewhere in these registers. I simply don't think it
is a good idea to set a precedent of not honoring reserved bit as
being reserved. The spec saying "read-only zero" can only be viewed
as correct for the current version of the spec, or else why would
the bits be called "reserved" rather than e.g. "read-as-zero"?

Jan


Re: [Xen-devel] [PATCH 3/4] AMD/IOMMU: Treat guest head/tail pointers as byte offsets

2020-02-10 Thread Andrew Cooper
On 05/02/2020 10:22, Jan Beulich wrote:
> On 03.02.2020 15:43, Andrew Cooper wrote:
>> The MMIO registers are already byte offsets.  By masking out the reserved bits
>> suitably in guest_iommu_mmio_write64(), we can use the values directly,
>> instead of masking/shifting on every use.
> I guess it's unclear whether such masking matches real hardware
> behavior, but it's certainly within spec with all other bits
> there reserved.
>
>> Store the buffer size, rather than the number of entries, to keep the same
>> units for comparison purposes.
>>
>> This simplifies guest_iommu_get_table_mfn() by dropping the entry_size
>> parameter, and simplifies the map_domain_page() handling by being able to 
>> drop
>> the log_base variables.
>>
>> No functional change.
> Well, not exactly - reads of those head/tail registers previously
> returned the last written value afaict.

It is actually a bugfix, because the spec prohibits preservation of
reserved bits.  I'll adjust the commit message to indicate this.

>
>> Signed-off-by: Andrew Cooper 
> Acked-by: Jan Beulich 



Re: [Xen-devel] [PATCH 2/4] AMD/IOMMU: Delete iommu_{get, set, clear}_bit() helpers

2020-02-10 Thread Jan Beulich
On 10.02.2020 15:39, Andrew Cooper wrote:
> On 05/02/2020 09:57, Jan Beulich wrote:
>> On 03.02.2020 15:43, Andrew Cooper wrote:
>>> @@ -85,16 +85,14 @@ static void flush_command_buffer(struct amd_iommu 
>>> *iommu)
>>>  loop_count = 1000;
>>>  do {
>>>  status = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>>> -comp_wait = get_field_from_reg_u32(status,
>>> -   IOMMU_STATUS_COMP_WAIT_INT_MASK,
>>> -   
>>> IOMMU_STATUS_COMP_WAIT_INT_SHIFT);
>>> +comp_wait = status & IOMMU_STATUS_COMP_WAIT_INT;
>> Unless you also change comp_wait to bool, this just happens to
>> be correct this way because of the bit checked being at a low
>> enough position.
>>
>>> --- a/xen/drivers/passthrough/amd/iommu_init.c
>>> +++ b/xen/drivers/passthrough/amd/iommu_init.c
>>> @@ -351,13 +351,12 @@ static void iommu_reset_log(struct amd_iommu *iommu,
>>>  BUG_ON(!iommu || ((log != >event_log) && (log != 
>>> >ppr_log)));
>>>  
>>>  run_bit = ( log == >event_log ) ?
>>> -IOMMU_STATUS_EVENT_LOG_RUN_SHIFT :
>>> -IOMMU_STATUS_PPR_LOG_RUN_SHIFT;
>>> +IOMMU_STATUS_EVENT_LOG_RUN : IOMMU_STATUS_PPR_LOG_RUN;
>>>  
>>>  /* wait until EventLogRun bit = 0 */
>>>  do {
>>>  entry = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>>> -log_run = iommu_get_bit(entry, run_bit);
>>> +log_run = entry & run_bit;
>> Same here for log_run then. I also think run_bit would better
>> become unsigned int then.
>>
>> With these taken care of
>> Reviewed-by: Jan Beulich 
> 
> I have separate cleanup doing that (and more).  I don't want to conflate
> unrelated changes - this patch is complicated enough to follow.

But strictly speaking the assignments end up wrong this way. If
you really think such (benign) wrongness is okay, then may I
please ask that you point out this aspect in half a sentence in
the description?

Jan
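
A standalone illustration (hypothetical values, not Xen code) of the hazard
being pointed out: assigning a masked flag to a type narrower than the
flag's bit position silently yields 0, while a bool conversion is correct
for any bit position.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t status = 1u << 15;             /* a flag above bit 7 */
    uint8_t  narrow = status & (1u << 15);  /* truncates: always 0 */
    bool     flag   = status & (1u << 15);  /* true for any bit position */

    printf("narrow=%d flag=%d\n", narrow, flag);
    return 0;
}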


Re: [Xen-devel] [PATCH 1/4] AMD/IOMMU: Move headers to be local

2020-02-10 Thread Jan Beulich
On 10.02.2020 15:30, Andrew Cooper wrote:
> On 05/02/2020 09:47, Jan Beulich wrote:
>> On 03.02.2020 15:43, Andrew Cooper wrote:
>>> We currently have amd-iommu-defs.h, amd-iommu-proto.h and amd-iommu.h, and 
>>> no
>>> references outside of the AMD IOMMU driver.
>>>
>>> Keep iommu-defs.h as is, but merge amd-iommu.h and amd-iommu-proto.h to just
>>> iommu.h, and move them both into drivers/passthrough/amd/.  (While merging,
>>> drop the bogus #pragma pack around the *_entry structures.)
>>>
>>> Take the opportunity to trim the include lists, including x86/mm/p2m.c
>> I guess you mean p2m.h here.
> 
> Why?
> 
>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>> index def13f657b..fd9f09536d 100644
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -38,7 +38,6 @@
>>  #include 
>>  #include 
>>  #include 
>> -#include 
>>  #include 
>>  #include 
>>  
> 
> I really do mean p2m.c

Okay, I misunderstood then. I've been judging from

>--- a/xen/drivers/passthrough/amd/iommu_map.c
>+++ b/xen/drivers/passthrough/amd/iommu_map.c
>@@ -18,12 +18,8 @@
>  */
> 
> #include 
>-#include 
>-#include 
>-#include 
>-#include 
>-#include "../ats.h"
>-#include 
>+
>+#include "iommu.h"

where you _also_ (re)move p2m.h.

Jan


Re: [Xen-devel] [PATCH 4/4] AMD/IOMMU: Treat head/tail pointers as byte offsets

2020-02-10 Thread Andrew Cooper
On 05/02/2020 10:36, Jan Beulich wrote:
> On 03.02.2020 15:43, Andrew Cooper wrote:
>> --- a/xen/drivers/passthrough/amd/iommu_cmd.c
>> +++ b/xen/drivers/passthrough/amd/iommu_cmd.c
>> @@ -24,16 +24,14 @@ static int queue_iommu_command(struct amd_iommu *iommu, 
>> u32 cmd[])
>>  {
>>  uint32_t tail, head;
>>  
>> -tail = iommu->cmd_buffer.tail;
>> -if ( ++tail == iommu->cmd_buffer.entries )
>> +tail = iommu->cmd_buffer.tail + IOMMU_CMD_BUFFER_ENTRY_SIZE;
>> +if ( tail == iommu->cmd_buffer.size )
>>  tail = 0;
>>  
>> -head = iommu_get_rb_pointer(readl(iommu->mmio_base +
>> -  IOMMU_CMD_BUFFER_HEAD_OFFSET));
>> +head = readl(iommu->mmio_base + IOMMU_CMD_BUFFER_HEAD_OFFSET);
>>  if ( head != tail )
> Surely you want to mask off reserved (or more generally
> unrelated) bits, before consuming the value for the purpose
> here (and elsewhere below)?

Reserved bits are defined in the IOMMU spec to be read-only zero.

It is also undefined behaviour for this value to ever be outside of the
size configured for command buffer, so using the value like this is spec
compliant.

As for actually masking the values, that breaks the optimisers ability
to construct commands in the command ring.  This aspect can be worked
around with other code changes, but I also think it is implausible that
the remaining reserved bits here are going to sprout incompatible future
uses.

>
>> @@ -45,13 +43,11 @@ static int queue_iommu_command(struct amd_iommu *iommu, 
>> u32 cmd[])
>>  
>>  static void commit_iommu_command_buffer(struct amd_iommu *iommu)
>>  {
>> -u32 tail = 0;
>> -
>> -iommu_set_rb_pointer(, iommu->cmd_buffer.tail);
>> -writel(tail, iommu->mmio_base+IOMMU_CMD_BUFFER_TAIL_OFFSET);
>> +writel(iommu->cmd_buffer.tail,
>> +   iommu->mmio_base + IOMMU_CMD_BUFFER_TAIL_OFFSET);
> I guess not preserving the reserved bits isn't a problem
> right now, but is doing so a good idea in general?

As above - there are by definition no bits to preserve.

>> @@ -316,22 +316,20 @@ static int iommu_read_log(struct amd_iommu *iommu,
>>  IOMMU_PPR_LOG_HEAD_OFFSET;
>>  
>>  tail = readl(iommu->mmio_base + tail_offest);
>> -tail = iommu_get_rb_pointer(tail);
>>  
>>  while ( tail != log->head )
>>  {
>>  /* read event log entry */
>> -entry = (u32 *)(log->buffer + log->head * entry_size);
>> +entry = (u32 *)(log->buffer + log->head);
> Would you mind dropping the pointless cast here at the same time?

Can do.

~Andrew


Re: [Xen-devel] [PATCH] xen/pvh: Fix segment selector ABI

2020-02-10 Thread Jan Beulich
On 10.02.2020 15:06, Andrew Cooper wrote:
> On 10/02/2020 14:00, Jan Beulich wrote:
>> On 10.02.2020 14:56, Andrew Cooper wrote:
>>> On 10/02/2020 13:47, Jan Beulich wrote:
 On 10.02.2020 14:29, Andrew Cooper wrote:
> On 10/02/2020 13:22, Jan Beulich wrote:
>> On 08.02.2020 16:19, Andrew Cooper wrote:
>>> --- a/docs/misc/pvh.pandoc
>>> +++ b/docs/misc/pvh.pandoc
>>> @@ -23,7 +23,7 @@ following machine state:
>>>   * `cs`: must be a 32-bit read/execute code segment with a base of ‘0’
>>> and a limit of ‘0x’. The selector value is unspecified.
>>>  
>>> - * `ds`, `es`: must be a 32-bit read/write data segment with a base of
>>> + * `ds`, `es`, `ss`: must be a 32-bit read/write data segment with a 
>>> base of
>>> ‘0’ and a limit of ‘0x’. The selector values are all 
>>> unspecified.
>> Wouldn't this want accompanying with an adjustment to
>> xen/arch/x86/hvm/domain.c:check_segment(), which right now
>> isn't in line with either old or new version of this doc?
> What do you think is missing?  It too is written with the expectation of
> %es being set up, which I checked before sending this patch.
 The function for example looks to permit zero segment attributes
 for both DS and ES. It also looks to permit non-writable
 attributes for both, and a non-readable CS.
>>> It is not a PVH-auditing function.
>>>
>>> It is reachable from plain HVM guests, and is only supposed to be a
>>> minimum set of checks to prevent a vmentry failure of the
>>> newly-initialised vcpu state.  (Whether it actually meets this goal is a
>>> separate matter.)
>> Well, that's fine, but what other place am I missing then where the
>> documented restrictions actually get enforced? Or if we don't mean
>> to enforce them, then perhaps there should be a distinction in the
>> doc between "must" and "should"?
> 
> The written ABI is the ABI.  Conforming implementations must (as in
> must) follow the rules.
> 
> The domain builder(s) are the only places which knows that the PVH start
> ABI is in use.

Looks like I got confused - I'm sorry for the noise.

Jan


Re: [Xen-devel] [PATCH 2/4] AMD/IOMMU: Delete iommu_{get, set, clear}_bit() helpers

2020-02-10 Thread Andrew Cooper
On 05/02/2020 09:57, Jan Beulich wrote:
> On 03.02.2020 15:43, Andrew Cooper wrote:
>> @@ -85,16 +85,14 @@ static void flush_command_buffer(struct amd_iommu *iommu)
>>  loop_count = 1000;
>>  do {
>>  status = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>> -comp_wait = get_field_from_reg_u32(status,
>> -   IOMMU_STATUS_COMP_WAIT_INT_MASK,
>> -   
>> IOMMU_STATUS_COMP_WAIT_INT_SHIFT);
>> +comp_wait = status & IOMMU_STATUS_COMP_WAIT_INT;
> Unless you also change comp_wait to bool, this just happens to
> be correct this way because of the bit checked being at a low
> enough position.
>
>> --- a/xen/drivers/passthrough/amd/iommu_init.c
>> +++ b/xen/drivers/passthrough/amd/iommu_init.c
>> @@ -351,13 +351,12 @@ static void iommu_reset_log(struct amd_iommu *iommu,
>>  BUG_ON(!iommu || ((log != >event_log) && (log != 
>> >ppr_log)));
>>  
>>  run_bit = ( log == >event_log ) ?
>> -IOMMU_STATUS_EVENT_LOG_RUN_SHIFT :
>> -IOMMU_STATUS_PPR_LOG_RUN_SHIFT;
>> +IOMMU_STATUS_EVENT_LOG_RUN : IOMMU_STATUS_PPR_LOG_RUN;
>>  
>>  /* wait until EventLogRun bit = 0 */
>>  do {
>>  entry = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>> -log_run = iommu_get_bit(entry, run_bit);
>> +log_run = entry & run_bit;
> Same here for log_run then. I also think run_bit would better
> become unsigned int then.
>
> With these taken care of
> Reviewed-by: Jan Beulich 

I have separate cleanup doing that (and more).  I don't want to conflate
unrelated changes - this patch is complicated enough to follow.

~Andrew


Re: [Xen-devel] [PATCH 1/4] AMD/IOMMU: Move headers to be local

2020-02-10 Thread Andrew Cooper
On 05/02/2020 09:47, Jan Beulich wrote:
> On 03.02.2020 15:43, Andrew Cooper wrote:
>> We currently have amd-iommu-defs.h, amd-iommu-proto.h and amd-iommu.h, and no
>> references outside of the AMD IOMMU driver.
>>
>> Keep iommu-defs.h as is, but merge amd-iommu.h and amd-iommu-proto.h to just
>> iommu.h, and move them both into drivers/passthrough/amd/.  (While merging,
>> drop the bogus #pragma pack around the *_entry structures.)
>>
>> Take the opportunity to trim the include lists, including x86/mm/p2m.c
> I guess you mean p2m.h here.

Why?

> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index def13f657b..fd9f09536d 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -38,7 +38,6 @@
>  #include 
>  #include 
>  #include 
> -#include 
>  #include 
>  #include 
>  

I really do mean p2m.c

~Andrew


Re: [Xen-devel] [PATCH 3/3] AMD/IOMMU: replace a few literal numbers

2020-02-10 Thread Andrew Cooper
On 05/02/2020 09:43, Jan Beulich wrote:
> Introduce IOMMU_PDE_NEXT_LEVEL_{MIN,MAX} to replace literal 1, 6, and 7
> instances. While doing so replace two uses of memset() by initializers.
>
> Signed-off-by: Jan Beulich 

This does not look to be an improvement.  IOMMU_PDE_NEXT_LEVEL_MIN is
definitely bogus, and in all cases, a literal 1 is better, because that
is how we describe pagetable levels.

Something to replace literal 6/7 probably is ok, but doesn't want to be
done like this.

The majority of the problems here are caused by iommu_pde_from_dfn()'s
silly ABI.  The pt_mfn[] array is problematic (because it is used as a
1-based array, not 0-based) and useless because both callers only want
the 4k-equivalent mfn.  Fixing the ABI gets rid of quite a lot of wasted
stack space, every use of '1', and every upper bound other than the
BUG_ON() and amd_iommu_get_paging_mode().

> ---
> TBD: We should really honor the hats field of union
>  amd_iommu_ext_features, but the specification (or at least the
>  parts I did look at in the course of putting together this patch)
>  is unclear about the maximum valid value in case EFRSup is clear.

It is available from PCI config space (Misc0 register, cap+0x10) even on
first gen IOMMUs, and the IVRS table in Type 10.

I'm honestly not sure why the information was duplicated into EFR, other
than perhaps for providing the information in a more useful format.

~Andrew


Re: [Xen-devel] [PATCH] xen/pvh: Fix segment selector ABI

2020-02-10 Thread Andrew Cooper
On 10/02/2020 14:00, Jan Beulich wrote:
> On 10.02.2020 14:56, Andrew Cooper wrote:
>> On 10/02/2020 13:47, Jan Beulich wrote:
>>> On 10.02.2020 14:29, Andrew Cooper wrote:
 On 10/02/2020 13:22, Jan Beulich wrote:
> On 08.02.2020 16:19, Andrew Cooper wrote:
>> --- a/docs/misc/pvh.pandoc
>> +++ b/docs/misc/pvh.pandoc
>> @@ -23,7 +23,7 @@ following machine state:
>>   * `cs`: must be a 32-bit read/execute code segment with a base of ‘0’
>> and a limit of ‘0x’. The selector value is unspecified.
>>  
>> - * `ds`, `es`: must be a 32-bit read/write data segment with a base of
>> + * `ds`, `es`, `ss`: must be a 32-bit read/write data segment with a 
>> base of
>> ‘0’ and a limit of ‘0x’. The selector values are all 
>> unspecified.
> Wouldn't this want accompanying with an adjustment to
> xen/arch/x86/hvm/domain.c:check_segment(), which right now
> isn't in line with either old or new version of this doc?
 What do you think is missing?  It too is written with the expectation of
 %es being set up, which I checked before sending this patch.
>>> The function for example looks to permit zero segment attributes
>>> for both DS and ES. It also looks to permit non-writable
>>> attributes for both, and a non-readable CS.
>> It is not a PVH-auditing function.
>>
>> It is reachable from plain HVM guests, and is only supposed to be a
>> minimum set of checks to prevent a vmentry failure of the
>> newly-initialised vcpu state.  (Whether it actually meets this goal is a
>> separate matter.)
> Well, that's fine, but what other place am I missing then where the
> documented restrictions actually get enforced? Or if we don't mean
> to enforce them, then perhaps there should be a distinction in the
> doc between "must" and "should"?

The written ABI is the ABI.  Conforming implementations must (as in
must) follow the rules.

The domain builder(s) are the only places which knows that the PVH start
ABI is in use.

Xen does not know, and therefore cannot legitimately check.  This
hypercall is used for more than just the PVH starting ABI, so must (as
it must) not have any PVH-ABI checks.

~Andrew


Re: [Xen-devel] [PATCH] xen/pvh: Fix segment selector ABI

2020-02-10 Thread Jan Beulich
On 10.02.2020 14:56, Andrew Cooper wrote:
> On 10/02/2020 13:47, Jan Beulich wrote:
>> On 10.02.2020 14:29, Andrew Cooper wrote:
>>> On 10/02/2020 13:22, Jan Beulich wrote:
 On 08.02.2020 16:19, Andrew Cooper wrote:
> --- a/docs/misc/pvh.pandoc
> +++ b/docs/misc/pvh.pandoc
> @@ -23,7 +23,7 @@ following machine state:
>   * `cs`: must be a 32-bit read/execute code segment with a base of ‘0’
> and a limit of ‘0x’. The selector value is unspecified.
>  
> - * `ds`, `es`: must be a 32-bit read/write data segment with a base of
> + * `ds`, `es`, `ss`: must be a 32-bit read/write data segment with a 
> base of
> ‘0’ and a limit of ‘0x’. The selector values are all 
> unspecified.
 Wouldn't this want accompanying with an adjustment to
 xen/arch/x86/hvm/domain.c:check_segment(), which right now
 isn't in line with either old or new version of this doc?
>>> What do you think is missing?  It too is written with the expectation of
>>> %es being set up, which I checked before sending this patch.
>> The function for example looks to permit zero segment attributes
>> for both DS and ES. It also looks to permit non-writable
>> attributes for both, and a non-readable CS.
> 
> It is not a PVH-auditing function.
> 
> It is reachable from plain HVM guests, and is only supposed to be a
> minimum set of checks to prevent a vmentry failure of the
> newly-initialised vcpu state.  (Whether it actually meets this goal is a
> separate matter.)

Well, that's fine, but what other place am I missing then where the
documented restrictions actually get enforced? Or if we don't mean
to enforce them, then perhaps there should be a distinction in the
doc between "must" and "should"?

Jan


Re: [Xen-devel] [PATCH 2/3] AMD/IOMMU: drop redundant code

2020-02-10 Thread Andrew Cooper
On 05/02/2020 09:42, Jan Beulich wrote:
> The level 1 special exit path is unnecessary in iommu_pde_from_dfn() -
> the subsequent code takes care of this case quite fine.
>
> Signed-off-by: Jan Beulich 

Reviewed-by: Andrew Cooper 


Re: [Xen-devel] [PATCH] xen/pvh: Fix segment selector ABI

2020-02-10 Thread Andrew Cooper
On 10/02/2020 13:47, Jan Beulich wrote:
> On 10.02.2020 14:29, Andrew Cooper wrote:
>> On 10/02/2020 13:22, Jan Beulich wrote:
>>> On 08.02.2020 16:19, Andrew Cooper wrote:
 --- a/docs/misc/pvh.pandoc
 +++ b/docs/misc/pvh.pandoc
 @@ -23,7 +23,7 @@ following machine state:
   * `cs`: must be a 32-bit read/execute code segment with a base of ‘0’
 and a limit of ‘0x’. The selector value is unspecified.
  
 - * `ds`, `es`: must be a 32-bit read/write data segment with a base of
 + * `ds`, `es`, `ss`: must be a 32-bit read/write data segment with a base 
 of
 ‘0’ and a limit of ‘0x’. The selector values are all 
 unspecified.
>>> Wouldn't this want accompanying with an adjustment to
>>> xen/arch/x86/hvm/domain.c:check_segment(), which right now
>>> isn't in line with either old or new version of this doc?
>> What do you think is missing?  It too is written with the expectation of
>> %es being set up, which I checked before sending this patch.
> The function for example looks to permit zero segment attributes
> for both DS and ES. It also looks to permit non-writable
> attributes for both, and a non-readable CS.

It is not a PVH-auditing function.

It is reachable from plain HVM guests, and is only supposed to be a
minimum set of checks to prevent a vmentry failure of the
newly-initialised vcpu state.  (Whether it actually meets this goal is a
separate matter.)

~Andrew


Re: [Xen-devel] [PATCH 1/3] AMD/IOMMU: fix off-by-one in amd_iommu_get_paging_mode() callers

2020-02-10 Thread Andrew Cooper
On 05/02/2020 09:42, Jan Beulich wrote:
> amd_iommu_get_paging_mode() expects a count, not a "maximum possible"
> value. Prior to b4f042236ae0 dropping the reference, the use of our mis-
> named "max_page" in amd_iommu_domain_init() may have lead to such a
> misunderstanding.
>
> Also replace a literal 4 by an expression tying it to a wider use
> constant, just like amd_iommu_quarantine_init() does.
>
> Fixes: ea38867831da ("x86 / iommu: set up a scratch page in the quarantine 
> domain")
> Fixes: b4f042236ae0 ("AMD/IOMMU: Cease using a dynamic height for the IOMMU 
> pagetables")
> Signed-off-by: Jan Beulich 
> ---
> Note: I'm not at the same time adding error checking here, despite
>   amd_iommu_get_paging_mode() possibly returning one, as I think
>   that's a sufficiently orthogonal aspect.

It is entirely non-obvious what amd_iommu_get_paging_mode() takes, which
is presumably what has led to this confusion.

It also seems silly to force a call into another translation unit when
2/3 of the callers can be evaluated at compile time.

How about re-implementing amd_iommu_get_paging_mode() as a static inline
(seeing as it is just basic arithmetic), and naming its parameter in a
more useful, e.g. max_frames ?

~Andrew
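
A minimal sketch of that static-inline idea follows; the function name, the
parameter name max_frames, the table-size constant and the level cap of 6
are illustrative assumptions, not the actual Xen implementation.

#define PTES_PER_TABLE 512   /* one 4K table of 8-byte page-table entries */

static inline int paging_mode_from_max_frames(unsigned long max_frames)
{
    int level = 1;

    while ( max_frames > PTES_PER_TABLE )
    {
        /* Round up so a partially filled higher-level table still counts. */
        max_frames = (max_frames + PTES_PER_TABLE - 1) / PTES_PER_TABLE;

        if ( ++level > 6 )   /* the AMD IOMMU supports at most 6 levels */
            return -1;
    }

    return level;
}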


Re: [Xen-devel] [PATCH] xen/pvh: Fix segment selector ABI

2020-02-10 Thread Jan Beulich
On 10.02.2020 14:29, Andrew Cooper wrote:
> On 10/02/2020 13:22, Jan Beulich wrote:
>> On 08.02.2020 16:19, Andrew Cooper wrote:
>>> --- a/docs/misc/pvh.pandoc
>>> +++ b/docs/misc/pvh.pandoc
>>> @@ -23,7 +23,7 @@ following machine state:
>>>   * `cs`: must be a 32-bit read/execute code segment with a base of ‘0’
>>> and a limit of ‘0x’. The selector value is unspecified.
>>>  
>>> - * `ds`, `es`: must be a 32-bit read/write data segment with a base of
>>> + * `ds`, `es`, `ss`: must be a 32-bit read/write data segment with a base 
>>> of
>>> ‘0’ and a limit of ‘0x’. The selector values are all 
>>> unspecified.
>> Wouldn't this want accompanying with an adjustment to
>> xen/arch/x86/hvm/domain.c:check_segment(), which right now
>> isn't in line with either old or new version of this doc?
> 
> What do you thing is missing?  It too is written with the expectation of
> %es being set up, which I checked before sending this patch.

The function for example looks to permit zero segment attributes
for both DS and ES. It also looks to permit non-writable
attributes for both, and a non-readable CS.

Jan


Re: [Xen-devel] [PATCH] xen/pvh: Fix segment selector ABI

2020-02-10 Thread Andrew Cooper
On 10/02/2020 13:22, Jan Beulich wrote:
> On 08.02.2020 16:19, Andrew Cooper wrote:
>> --- a/docs/misc/pvh.pandoc
>> +++ b/docs/misc/pvh.pandoc
>> @@ -23,7 +23,7 @@ following machine state:
>>   * `cs`: must be a 32-bit read/execute code segment with a base of ‘0’
>> and a limit of ‘0x’. The selector value is unspecified.
>>  
>> - * `ds`, `es`: must be a 32-bit read/write data segment with a base of
>> + * `ds`, `es`, `ss`: must be a 32-bit read/write data segment with a base of
>> ‘0’ and a limit of ‘0x’. The selector values are all unspecified.
> Wouldn't this want accompanying with an adjustment to
> xen/arch/x86/hvm/domain.c:check_segment(), which right now
> isn't in line with either old or new version of this doc?

What do you think is missing?  It too is written with the expectation of
%es being set up, which I checked before sending this patch.

~Andrew


Re: [Xen-devel] [PATCH v3 4/4] xen/netback: fix grant copy across page boundary

2020-02-10 Thread Sergey Dyasli
On 07/02/2020 14:36, David Miller wrote:
> From: Sergey Dyasli 
> Date: Fri, 7 Feb 2020 14:26:52 +
>
>> From: Ross Lagerwall 
>>
>> When KASAN (or SLUB_DEBUG) is turned on, there is a higher chance that
>> non-power-of-two allocations are not aligned to the next power of 2 of
>> the size. Therefore, handle grant copies that cross page boundaries.
>>
>> Signed-off-by: Ross Lagerwall 
>> Signed-off-by: Sergey Dyasli 
>> Acked-by: Paul Durrant 
>
> This is part of a larger patch series to which netdev was not CC:'d
>
> Where is this patch targetted to be applied?
>
> Do you expect a networking ACK on this?
>
> Please do not submit patches in such an ambiguous manner like this
> in the future, thank you.

Please see the following for more context:

https://lore.kernel.org/linux-mm/20200122140512.zxtld5sanohpmgt2@debian/

Sorry for not providing enough context with this submission.

--
Thanks,
Sergey


Re: [Xen-devel] [PATCH] xen/pvh: Fix segment selector ABI

2020-02-10 Thread Jan Beulich
On 08.02.2020 16:19, Andrew Cooper wrote:
> --- a/docs/misc/pvh.pandoc
> +++ b/docs/misc/pvh.pandoc
> @@ -23,7 +23,7 @@ following machine state:
>   * `cs`: must be a 32-bit read/execute code segment with a base of ‘0’
> and a limit of ‘0x’. The selector value is unspecified.
>  
> - * `ds`, `es`: must be a 32-bit read/write data segment with a base of
> + * `ds`, `es`, `ss`: must be a 32-bit read/write data segment with a base of
> ‘0’ and a limit of ‘0x’. The selector values are all unspecified.

Wouldn't this want to be accompanied by an adjustment to
xen/arch/x86/hvm/domain.c:check_segment(), which right now
isn't in line with either the old or the new version of this doc?

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [OSSTEST PATCH] production-config: Update TftpDiVersion_stretch

2020-02-10 Thread Ian Jackson
Signed-off-by: Ian Jackson 
---
 production-config | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/production-config b/production-config
index 41f68409..103b8915 100644
--- a/production-config
+++ b/production-config
@@ -90,7 +90,7 @@ TftpNetbootGroup osstest
 # Update with ./mg-debian-installer-update(-all)
 TftpDiVersion_wheezy 2016-06-08
 TftpDiVersion_jessie 2018-06-26
-TftpDiVersion_stretch 2019-09-10
+TftpDiVersion_stretch 2020-02-10
 
 DebianSnapshotBackports_jessie 
http://snapshot.debian.org/archive/debian/20190206T211314Z/
 
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 2/3] bitmap: import bitmap_{set/clear} from Linux 5.5

2020-02-10 Thread Julien Grall



On 10/02/2020 12:32, Jan Beulich wrote:

On 10.02.2020 13:21, Julien Grall wrote:

Hi,

On 10/02/2020 11:59, Jan Beulich wrote:

On 10.02.2020 12:00, Julien Grall wrote:

On 10/02/2020 10:28, Jan Beulich wrote:

On 10.02.2020 10:45, Julien Grall wrote:

Please suggest a new name for BIT_WORD() and we can repurpose it. So
far, I have no idea how to rename it.


_BIT_WORD() if you/we were to accept the name space violation, or
BITMAP_WORD()?


BITMAP_WORD() is misleading as bitmaps use unsigned long. So my
preference is _BIT_WORD().

Another alternative would be ATOMIC_WORD().


Except that there are also non-atomic bitmap operations, I don't really
care about the name as long as it's not BIT_WORD() (or anything else
that's likely to collide with other stuff).


I am afraid we are disagreeing on what is colliding with what here. The
naming on Arm has been like that for the past few years. While this may
not have been the best choice, this is your suggestion colliding with
what is existing.


It is a plain import from Linux which has turned out impossible
because of the change that was done at some point to Arm code
which, I guess, also originally came from Linux. There's no new
naming I've been suggesting here at all.


We never claimed we would be fully compatible with Linux and I don't 
think we could ever claim it. Particularly, the bitop operations are 
different given Linux bitops are based on unsigned long.


The bitops did indeed come from Linux originally; however, we had to adapt 
them because the Linux Armv8 bitops expect 8-byte alignment. This does 
not hold on Xen.





I am not entirely fussed about the namespace violation, although I think
the name is potentially misleading. Yet, I would be happy to use
_BIT_WORD() as this is the best option so far.

While this code falls under Arm maintainership, I am still happy to
consider other naming. But at this point, you should be the one suggesting.


BIT_UNIT() or BITOP_UNIT() or BITOP_WORD()?


BITOP_WORD().

Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] Xen fails to resume on AMD Fam15h (and Fam17h?) because of CPUID mismatch

2020-02-10 Thread Andrew Cooper
On 10/02/2020 12:14, Marek Marczykowski-Górecki wrote:
> On Mon, Feb 10, 2020 at 11:17:34AM +, Andrew Cooper wrote:
>> On 10/02/2020 08:55, Jan Beulich wrote:
>>> On 10.02.2020 00:06, Marek Marczykowski-Górecki wrote:
 Hi,

 Multiple Qubes users have reported issues with resuming from S3 on AMD
 systems (Ryzen 2500U, Ryzen Pro 3700U, maybe more). The error message
 is:

 (XEN) CPU0: cap[ 1] is 7ed8320b (expected f6d8320b)

 If I read it right, this is:
   - OSXSAVE: 0 -> 1
   - HYPERVISOR: 1 -> 0

 Commenting out the panic on a failed recheck_cpu_features() in power.c
 makes the system work after resume, reportedly stable. But that doesn't
 sound like a good idea generally. But that doesn't

 Is this difference a Xen fault (some missing MSR / other register
 restore on resume)? Or BIOS vendor / AMD, that could be worked around in
 Xen?
>>> The transition of the HYPERVISOR bit is definitely a Xen issue,
>>> with Andrew having sent a patch already (iirc).
>> https://lore.kernel.org/xen-devel/20200127202121.2961-1-andrew.coop...@citrix.com/
>>
>> Code is correct.  Commit message needs rework, including in light of
>> this discovery.  (I may eventually split it into two patches.)
> Claudia, do you want to test with this patch?
>
>>> The OSXSAVE part is a little more surprising,
>> Not to me.  The checks only care if feature bits have gone missing, not
>> if new ones have appeared.
>>
>> mmu_cr4_features includes OSXSAVE (from much later on boot than features
>> get cached), so the s3 path observing the gain of OSXSAVE will have been
>> happening ever since the checks were introduced (even on Intel.)
> Is "x86: store cr4 during suspend/resume" patch from Roger related to
> this?

No.  It wouldn't have any effect on this issue, and hasn't been / won't be
taken for the reasons I described in the email chain.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] tools/configure: generate stubs and long-double 32-bit headers if needed

2020-02-10 Thread Roger Pau Monné
On Sun, Feb 09, 2020 at 08:35:14PM -0800, Christopher Clark wrote:
> The gnu/stubs-32.h and bits/long-double-32.h headers are required to
> build hvmloader but are not always available in 64-bit build
> environments. To avoid introducing a build requirement on 32-bit
> multilib, generate versions of them from the 64-bit equivalent header.
> 
> This patch enables the removal of downstream patching that has been
> carried in the Yocto/OpenEmbedded meta-virtualization layer since 2012.
> 
> Signed-off-by: Christopher Clark 

I think this would be better done as part of populating tools/include.

Also this looks specific to using gcc to build the tools, which could
be skipped when building with clang?

Thanks, Roger.

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 2/3] bitmap: import bitmap_{set/clear} from Linux 5.5

2020-02-10 Thread Jan Beulich
On 10.02.2020 13:21, Julien Grall wrote:
> Hi,
> 
> On 10/02/2020 11:59, Jan Beulich wrote:
>> On 10.02.2020 12:00, Julien Grall wrote:
>>> On 10/02/2020 10:28, Jan Beulich wrote:
 On 10.02.2020 10:45, Julien Grall wrote:
> Please suggest a new name for BIT_WORD() and we can repurpose it. So
> far, I have no idea how to rename it.

 _BIT_WORD() if you/we were to accept the name space violation, or
 BITMAP_WORD()?
>>>
>>> BITMAP_WORD() is misleading as bitmaps use unsigned long. So my
>>> preference is _BIT_WORD().
>>>
>>> Another alternative would be ATOMIC_WORD().
>>
>> Except that there are also non-atomic bitmap operations, I don't really
>> care about the name as long as it's not BIT_WORD() (or anything else
>> that's likely to collide with other stuff).
> 
> I am afraid we are disagreeing on what is colliding with what here. The 
> naming on Arm has been like that for the past few years. While this may 
> not have been the best choice, this is your suggestion colliding with 
> what is existing.

It is a plain import from Linux which has turned out impossible
because of the change that was done at some point to Arm code
which, I guess, also originally came from Linux. There's no new
naming I've been suggesting here at all.

> I am not entirely fussed about the namespace violation, although I think 
> the name is potentially misleading. Yet, I would be happy to use 
> _BIT_WORD() as this is the best option so far.
> 
> While this code falls under Arm maintainership, I am still happy to 
> consider other naming. But at this point, you should be the one suggesting.

BIT_UNIT() or BITOP_UNIT() or BITOP_WORD()?

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 2/3] bitmap: import bitmap_{set/clear} from Linux 5.5

2020-02-10 Thread Julien Grall

Hi,

On 10/02/2020 11:59, Jan Beulich wrote:

On 10.02.2020 12:00, Julien Grall wrote:

On 10/02/2020 10:28, Jan Beulich wrote:

On 10.02.2020 10:45, Julien Grall wrote:

Please suggest a new name for BIT_WORD() and we can repurpose it. So
far, I have no idea how to rename it.


_BIT_WORD() if you/we were to accept the name space violation, or
BITMAP_WORD()?


BITMAP_WORD() is misleading as bitmaps use unsigned long. So my
preference is _BIT_WORD().

Another alternative would be ATOMIC_WORD().


Except that there are also non-atomic bitmap operations, I don't really
care about the name as long as it's not BIT_WORD() (or anything else
that's likely to collide with other stuff).


I am afraid we are disagreeing on what is colliding with what here. The 
naming on Arm has been like that for the past few years. While this may 
not have been the best choice, this is your suggestion colliding with 
what is existing.


I am not entirely fussed about the namespace violation, although I think 
the name is potentially misleading. Yet, I would be happy to use 
_BIT_WORD() as this is the best option so far.


While this code falls under Arm maintainership, I am still happy to 
consider other naming. But at this point, you should be the one suggesting.


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] Xen fails to resume on AMD Fam15h (and Fam17h?) because of CPUID mismatch

2020-02-10 Thread Marek Marczykowski-Górecki
On Mon, Feb 10, 2020 at 11:17:34AM +, Andrew Cooper wrote:
> On 10/02/2020 08:55, Jan Beulich wrote:
> > On 10.02.2020 00:06, Marek Marczykowski-Górecki wrote:
> >> Hi,
> >>
> >> Multiple Qubes users have reported issues with resuming from S3 on AMD
> >> systems (Ryzen 2500U, Ryzen Pro 3700U, maybe more). The error message
> >> is:
> >>
> >> (XEN) CPU0: cap[ 1] is 7ed8320b (expected f6d8320b)
> >>
> >> If I read it right, this is:
> >>   - OSXSAVE: 0 -> 1
> >>   - HYPERVISOR: 1 -> 0
> >>
> >> Commenting out the panic on a failed recheck_cpu_features() in power.c
> >> makes the system work after resume, reportedly stable. But that doesn't
> >> sound like a good idea generally.
> >>
> >> Is this difference a Xen fault (some missing MSR / other register
> >> restore on resume)? Or BIOS vendor / AMD, that could be worked around in
> >> Xen?
> > The transition of the HYPERVISOR bit is definitely a Xen issue,
> > with Andrew having sent a patch already (iirc).
> 
> https://lore.kernel.org/xen-devel/20200127202121.2961-1-andrew.coop...@citrix.com/
> 
> Code is correct.  Commit message needs rework, including in light of
> this discovery.  (I may eventually split it into two patches.)

Claudia, do you want to test with this patch?

> > The OSXSAVE part is a little more surprising,
> 
> Not to me.  The checks only care if feature bits have gone missing, not
> if new ones have appeared.
> 
> mmu_cr4_features includes OSXSAVE (from much later on boot than features
> get cached), so the s3 path observing the gain of OSXSAVE will have been
> happening ever since the checks were introduced (even on Intel.)

Is "x86: store cr4 during suspend/resume" patch from Roger related to
this?

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


signature.asc
Description: PGP signature
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [xen-unstable test] 146815: trouble: blocked/broken

2020-02-10 Thread osstest service owner
flight 146815 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146815/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm   broken
 build-armhf-pvopsbroken
 build-armhf  broken
 build-i386-pvops broken
 build-i386-prev  broken
 build-amd64-xsm  broken
 build-arm64-xsm  broken
 build-amd64-xtf  broken
 build-arm64-pvopsbroken
 build-amd64-pvopsbroken
 build-amd64-prev broken
 build-arm64  broken
 build-amd64  broken
 build-i386   broken
 build-i386-pvops  4 host-install(4)broken REGR. vs. 146796
 build-i3864 host-install(4)broken REGR. vs. 146796
 build-amd64-xtf   4 host-install(4)broken REGR. vs. 146796
 build-amd64   4 host-install(4)broken REGR. vs. 146796
 build-arm64-pvops 4 host-install(4)broken REGR. vs. 146796
 build-arm64-xsm   4 host-install(4)broken REGR. vs. 146796
 build-arm64   4 host-install(4)broken REGR. vs. 146796
 build-amd64-xsm   4 host-install(4)broken REGR. vs. 146796
 build-armhf-pvops 4 host-install(4)broken REGR. vs. 146796
 build-amd64-pvops 4 host-install(4)broken REGR. vs. 146796
 build-i386-xsm4 host-install(4)broken REGR. vs. 146796
 build-amd64-prev  4 host-install(4)broken REGR. vs. 146796
 build-i386-prev   4 host-install(4)broken REGR. vs. 146796
 build-armhf   4 host-install(4)broken REGR. vs. 146796

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked 
n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-xtf-amd64-amd64-31 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-examine  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked 
n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 

Re: [Xen-devel] [PATCH] x86/svm: Reduce vmentry latency

2020-02-10 Thread Roger Pau Monné
On Mon, Feb 10, 2020 at 11:42:06AM +, Andrew Cooper wrote:
> Writing to the stack pointer in the middle of a line of pop operations is
> specifically recommended against by the optimisation guide, and is a technique
> used by Speculative Load Hardening to combat SpectreRSB.
> 
> In practice, it causes all further stack-relative accesses to block until the
> write to the stack pointer retires, so the stack engine can get back in sync.
> 
> Pop into any dead register to discard %rax's value without clobbering the
> stack engine.  Smaller compiled code, and runs faster.
> 
> Signed-off-by: Andrew Cooper 

Reviewed-by: Roger Pau Monné 

Thanks.

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 2/3] bitmap: import bitmap_{set/clear} from Linux 5.5

2020-02-10 Thread Jan Beulich
On 10.02.2020 12:00, Julien Grall wrote:
> On 10/02/2020 10:28, Jan Beulich wrote:
>> On 10.02.2020 10:45, Julien Grall wrote:
>>> Please suggest a new name for BIT_WORD() and we can repurpose it. So
>>> far, I have no idea how to rename it.
>>
>> _BIT_WORD() if you/we were to accept the name space violation, or
>> BITMAP_WORD()?
> 
> BITMAP_WORD() is misleading as bitmaps use unsigned long. So my 
> preference is _BIT_WORD().
> 
> Another alternative would be ATOMIC_WORD().

Except that there are also non-atomic bitmap operations, I don't really
care about the name as long as it's not BIT_WORD() (or anything else
that's likely to collide with other stuff).

Jan
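
For context, the collision is between two indexing conventions: the imported
Linux helpers index an array of unsigned long, while the existing Arm macro
(as described in this thread) indexes 32-bit words.  A sketch, with
hypothetical macro names to keep the two apart:

#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Linux-style: which unsigned long of the bitmap holds bit 'nr'. */
#define BIT_WORD_LONG(nr)  ((nr) / BITS_PER_LONG)

/* Arm-Xen-style, per this thread: which 32-bit word holds bit 'nr'. */
#define BIT_WORD_32(nr)    ((nr) / 32)

/* On a 64-bit build, bit 40 lives in element 0 of an unsigned long bitmap
 * (BIT_WORD_LONG(40) == 0) but in element 1 of a 32-bit-word bitmap
 * (BIT_WORD_32(40) == 1), so the two macros cannot be mixed. */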

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH] x86/svm: Reduce vmentry latency

2020-02-10 Thread Andrew Cooper
Writing to the stack pointer in the middle of a line of pop operations is
specifically recommended against by the optimisation guide, and is a technique
used by Speculative Load Hardening to combat SpectreRSB.

In practice, it causes all further stack-relative accesses to block until the
write to the stack pointer retires, so the stack engine can get back in sync.

Pop into any dead register to discard %rax's value without clobbering the
stack engine.  Smaller compiled code, and runs faster.

Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Wei Liu 
CC: Roger Pau Monné 

In a small test where I wired ICEBP to tightly re-enter the guest, this dropped
the guest's perceived time for ICEBP (as close to one vmexit and entry as I
could realistically manage) by 20 ticks.  Sadly, that also seems to be the
granularity of measurement.  The modal measurement (accounting for 80% of
samples) was 1200 ticks, and reduced to 1180 with just this change in place.
---
 xen/arch/x86/hvm/svm/entry.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index e954d8e021..1d2df08e89 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -76,7 +76,7 @@ __UNLIKELY_END(nsvm_hap)
 pop  %r10
 pop  %r9
 pop  %r8
-add  $8,%rsp /* Skip %rax: restored by VMRUN. */
+pop  %rcx /* Skip %rax: restored by VMRUN. */
 pop  %rcx
 pop  %rdx
 pop  %rsi
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] Xen fails to resume on AMD Fam15h (and Fam17h?) because of CPUID mismatch

2020-02-10 Thread Andrew Cooper
On 10/02/2020 08:55, Jan Beulich wrote:
> On 10.02.2020 00:06, Marek Marczykowski-Górecki wrote:
>> Hi,
>>
>> Multiple Qubes users have reported issues with resuming from S3 on AMD
>> systems (Ryzen 2500U, Ryzen Pro 3700U, maybe more). The error message
>> is:
>>
>> (XEN) CPU0: cap[ 1] is 7ed8320b (expected f6d8320b)
>>
>> If I read it right, this is:
>>   - OSXSAVE: 0 -> 1
>>   - HYPERVISOR: 1 -> 0
>>
>> Commenting out the panic on a failed recheck_cpu_features() in power.c
>> makes the system work after resume, reportedly stable. But that doesn't
>> sound like a good idea generally.
>>
>> Is this difference a Xen fault (some missing MSR / other register
>> restore on resume)? Or BIOS vendor / AMD, that could be worked around in
>> Xen?
> The transition of the HYPERVISOR bit is definitely a Xen issue,
> with Andrew having sent a patch already (iirc).

https://lore.kernel.org/xen-devel/20200127202121.2961-1-andrew.coop...@citrix.com/

Code is correct.  Commit message needs rework, including in light of
this discovery.  (I may eventually split it into two patches.)

> The OSXSAVE part is a little more surprising,

Not to me.  The checks only care if feature bits have gone missing, not
if new ones have appeared.

mmu_cr4_features includes OSXSAVE (from much later on boot than features
get cached), so the s3 path observing the gain of OSXSAVE will have been
happening ever since the checks were introduced (even on Intel.)

~Andrew
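
In other words (a sketch only, not Xen's recheck_cpu_features()): a bit that
was present at boot but is gone after resume trips the check, while a bit
that merely appears, such as OSXSAVE here, is ignored:

#include <stdbool.h>
#include <stdint.h>

/* Sketch: compare one 32-bit capability word across an S3 cycle. */
static bool caps_ok(uint32_t boot_caps, uint32_t resume_caps)
{
    uint32_t missing = boot_caps & ~resume_caps;  /* bits lost across resume */

    /* Newly appearing bits are deliberately not looked at here. */
    return missing == 0;
}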

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 2/3] bitmap: import bitmap_{set/clear} from Linux 5.5

2020-02-10 Thread Julien Grall



On 10/02/2020 10:28, Jan Beulich wrote:

On 10.02.2020 10:45, Julien Grall wrote:

Please suggest a new name for BIT_WORD() and we can repurpose it. So
far, I have no idea how to rename it.


_BIT_WORD() if you/we were to accept the name space violation, or
BITMAP_WORD()?


BITMAP_WORD() is misleading as bitmaps use unsigned long. So my 
preference is _BIT_WORD().


Another alternative would be ATOMIC_WORD().

Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] xen/pvh: Fix segment selector ABI

2020-02-10 Thread Roger Pau Monné
On Sat, Feb 08, 2020 at 03:19:39PM +, Andrew Cooper wrote:
> The written ABI states that %es will be set up, but libxc doesn't do so.  In
> practice, it breaks `rep movs` inside guests before they reload %es.
> 
> The written ABI doesn't mention %ss, but libxc does set it up.  Having %ds
> different to %ss is obnoxious to work with, as different registers have
> different implicit segments.
> 
> Modify the spec to state that %ss is set up as a flat read/write segment.
> This a) matches the Multiboot 1 spec, b) matches what is set up in practice,
> and c) is the more sane behaviour for guests to use.

Fixes: 68e1183411b ('libxc: introduce a xc_dom_arch for hvm-3.0-x86_32 guests')

> Signed-off-by: Andrew Cooper 

Reviewed-by: Roger Pau Monné 

I would add a note that this also affects the HVM initial CPU state
(albeit that is not part of an ABI).

Thanks, Roger.
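
As a reminder of why a missing %es breaks `rep movs`: the string-move
instructions implicitly write through %es:%rdi (and read %ds:%rsi), so any
copy done before the guest reloads %es relies on the toolstack's initial
segment state.  A minimal sketch in GCC inline assembly (illustration only,
not guest code from any particular OS):

#include <stddef.h>

/* Copy n bytes with 'rep movsb'; this works only if %es is a usable flat
 * data segment, since the destination is addressed via %es:%rdi. */
static void copy_bytes(void *dst, const void *src, size_t n)
{
    asm volatile ("rep movsb"
                  : "+D" (dst), "+S" (src), "+c" (n)
                  :
                  : "memory");
}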

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 2/3] bitmap: import bitmap_{set/clear} from Linux 5.5

2020-02-10 Thread Jan Beulich
On 10.02.2020 10:45, Julien Grall wrote:
> Please suggest a new name for BIT_WORD() and we can repurpose it. So 
> far, I have no idea how to rename it.

_BIT_WORD() if you/we were to accept the name space violation, or
BITMAP_WORD()?

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [xen-unstable-smoke test] 146825: trouble: blocked/broken

2020-02-10 Thread osstest service owner
flight 146825 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146825/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 build-armhf  broken
 build-arm64-xsm  broken
 build-amd64   4 host-install(4)broken REGR. vs. 146767
 build-arm64-xsm   4 host-install(4)broken REGR. vs. 146767
 build-armhf   4 host-install(4)broken REGR. vs. 146767

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a

version targeted for testing:
 xen  69da7d5440c609c57c5bba9a73b91c62ba2852e6
baseline version:
 xen  72dbcf0c065037dddb591a072c4f8f16fe888ea8

Last test of basis   146767  2020-02-06 17:01:03 Z3 days
Testing same since   146806  2020-02-08 13:00:53 Z1 days8 attempts


People who touched revisions under test:
  Andrew Cooper 
  Jan Beulich 
  Jeff Kubascik 
  Julien Grall 
  Julien Grall 
  Wei Liu 

jobs:
 build-arm64-xsm  broken  
 build-amd64  broken  
 build-armhf  broken  
 build-amd64-libvirt  blocked 
 test-armhf-armhf-xl  blocked 
 test-arm64-arm64-xl-xsm  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64blocked 
 test-amd64-amd64-libvirt blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-armhf broken
broken-job build-arm64-xsm broken
broken-step build-amd64 host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-armhf host-install(4)

Not pushing.


commit 69da7d5440c609c57c5bba9a73b91c62ba2852e6
Author: Jeff Kubascik 
Date:   Tue Feb 4 14:51:50 2020 -0500

xen/arm: Handle unimplemented VGICv3 registers as RAZ/WI

Per the ARM Generic Interrupt Controller Architecture Specification (ARM
IHI 0069E), reserved registers should generally be treated as RAZ/WI.
To simplify the VGICv3 design and improve guest compatibility, treat the
default case for GICD and GICR registers as read_as_zero/write_ignore.

Signed-off-by: Jeff Kubascik 
Acked-by: Julien Grall 
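
Roughly, "RAZ/WI by default" means the register emulation's fallback paths
behave like the sketch below; this is an illustration of the concept, not
the actual vgic-v3 code:

#include <stdint.h>

/* Sketch of RAZ/WI defaults in an MMIO emulation handler. */
static int vreg_read(unsigned int offset, uint32_t *val)
{
    switch (offset) {
    /* ... cases for implemented registers go here ... */
    default:
        *val = 0;      /* Read-As-Zero */
        return 1;      /* access handled */
    }
}

static int vreg_write(unsigned int offset, uint32_t val)
{
    switch (offset) {
    /* ... cases for implemented registers go here ... */
    default:
        (void)val;     /* Write-Ignore: discard the value */
        return 1;      /* access handled */
    }
}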

commit cbd1a54f6dea3f4a7feed51e189ebae50ac9dd15
Author: Julien Grall 
Date:   Tue Feb 4 16:53:49 2020 +

xen/include: Fix typoes in asm-x86/domain.h

Signed-off-by: Julien Grall 
Acked-by: Andrew Cooper 

commit 24ea7abcdc8654ad2d9831a79c6d6f580aca6a3c
Author: Julien Grall 
Date:   Thu Feb 6 15:41:18 2020 +

xen/include: public: Document the padding in struct xen_hvm_param

There is an implicit padding of 2 bytes in struct xen_hvm_param between
the field domid and index. Make it explicit by introduce a padding
field. This can also serve as documentation.

Note that I don't think we can mandate it to be zero because a guest may
not have initialized the padding.

Signed-off-by: Julien Grall 
Acked-by: Jan Beulich 
Acked-by: Wei Liu 
(qemu changes not included)
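
A sketch of the layout change the commit message describes; the field types
are assumed from context, and the public header remains the authoritative
definition:

#include <stdint.h>

/* Before: the compiler inserts 2 bytes of implicit padding after domid. */
struct xen_hvm_param_before {
    uint16_t domid;
    uint32_t index;
    uint64_t value;
};

/* After: the padding is spelled out; old guests may leave it uninitialised,
 * so it cannot be required to be zero. */
struct xen_hvm_param_after {
    uint16_t domid;
    uint16_t pad;
    uint32_t index;
    uint64_t value;
};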

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 2/3] bitmap: import bitmap_{set/clear} from Linux 5.5

2020-02-10 Thread Julien Grall

Hi Jan,

On 10/02/2020 09:31, Jan Beulich wrote:

On 10.02.2020 10:20, Julien Grall wrote:

Hi Jan,

On 10/02/2020 08:43, Jan Beulich wrote:

On 08.02.2020 15:37, Julien Grall wrote:



On 05/02/2020 13:27, Jan Beulich wrote:

On 05.02.2020 14:21, Roger Pau Monné wrote:

On Wed, Feb 05, 2020 at 09:46:25AM +0100, Jan Beulich wrote:

On 04.02.2020 18:34, Roger Pau Monne wrote:

Import the functions and its dependencies. Based on Linux 5.5, commit
id d5226fa6dbae0569ee43ecfc08bdcd6770fc4755.

Signed-off-by: Roger Pau Monné 


Thanks for going this route; two remarks / requests:


--- a/xen/common/bitmap.c
+++ b/xen/common/bitmap.c
@@ -212,6 +212,47 @@ int __bitmap_weight(const unsigned long *bitmap, int bits)
#endif
EXPORT_SYMBOL(__bitmap_weight);

+void __bitmap_set(unsigned long *map, unsigned int start, int len)

+{
+   unsigned long *p = map + BIT_WORD(start);
+   const unsigned int size = start + len;
+   int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
+   unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
+
+   while (len - bits_to_set >= 0) {
+   *p |= mask_to_set;
+   len -= bits_to_set;
+   bits_to_set = BITS_PER_LONG;
+   mask_to_set = ~0UL;
+   p++;
+   }
+   if (len) {
+   mask_to_set &= BITMAP_LAST_WORD_MASK(size);
+   *p |= mask_to_set;
+   }
+}
+EXPORT_SYMBOL(__bitmap_set);
+
+void __bitmap_clear(unsigned long *map, unsigned int start, int len)
+{
+   unsigned long *p = map + BIT_WORD(start);
+   const unsigned int size = start + len;
+   int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
+   unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
+
+   while (len - bits_to_clear >= 0) {
+   *p &= ~mask_to_clear;
+   len -= bits_to_clear;
+   bits_to_clear = BITS_PER_LONG;
+   mask_to_clear = ~0UL;
+   p++;
+   }
+   if (len) {
+   mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
+   *p &= ~mask_to_clear;
+   }
+}
+EXPORT_SYMBOL(__bitmap_clear);


Despite all the other EXPORT_SYMBOL() in this file, personally I
would suggest to refrain from adding more. But I'm not going to
insist (until such time that they all get cleaned up).


--- a/xen/include/asm-x86/bitops.h
+++ b/xen/include/asm-x86/bitops.h
@@ -480,4 +480,6 @@ static inline int fls(unsigned int x)
#define hweight16(x) generic_hweight16(x)
#define hweight8(x) generic_hweight8(x)

+#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)


At first I thought - why for x86 only? Then I noticed Arm has an
almost identical #define already. Which in turn made me look at
Linux, where that #define lives in a common header. I think you
want to move the Arm one. Or wait, no - Arm's isn't even
compatible with the implementations of the functions you add.
This definitely needs taking care of, perhaps by way of ignoring
my request to go this route (as getting too involved).


Urg, yes, I didn't realize that BIT_WORD on ARM is only meant to be
used when the bitmap is mapped to an array of 32bit type elements.

I could introduce BIT_LONG that would have the same definition on Arm
and x86, and then modify the imported functions to use it, but IMO the
right solution would be to change the Arm BIT_WORD macro to also use
BITS_PER_LONG (and adjust the callers).


So do I. Julien, Stefano?


BIT_WORD used to use BITS_PER_LONG but this was changed in commit:

commit cd338e967c598bf747b03dcfd9d8d45dc40bac1a
Author: Ian Campbell 
Date:   Thu May 8 16:13:55 2014 +0100

   xen: arm: bitops take unsigned int

   Xen bitmaps can be 4 rather than 8 byte aligned, so use the
appropriate type.
   Otherwise the compiler can generate unaligned 8 byte accesses and
cause traps.

   Signed-off-by: Ian Campbell 
   Acked-by: Stefano Stabellini 

On 64-bit Arm, while we allow unaligned access, the atomic operations
still enforce alignment.

On 32-bit Arm, no unaligned accesses are allowed. However, the
change of BIT_WORD is not a concern for 32-bit.

I haven't checked whether we still have places where bitops are used with
4 byte aligned memory. However, as the bitops take a void * in
parameter, there is no promise on the alignment.


I'm pretty sure for x86 the 32-bit guest compat code uses such, at
the very least.


I have spent some time looking at it and noticed there are some in the
common code (e.g. scheduler, IRQ...).




Therefore, we can't rewrite BIT_WORD without addressing the underlying
issues. Introducing BIT_LONG is probably the easiest way at the moment.


Which would make us (continue to) deviate from Linux's meaning of
BIT_WORD().


This would not really be the first time we deviate from Linux...


Of course. And there've been quite a few cases where I've argued
towards deviation. It's just that iirc you're one of those who
prefer less deviation, so I've been 

Re: [Xen-devel] [PATCH v4 2/3] bitmap: import bitmap_{set/clear} from Linux 5.5

2020-02-10 Thread Jan Beulich
On 10.02.2020 10:20, Julien Grall wrote:
> Hi Jan,
> 
> On 10/02/2020 08:43, Jan Beulich wrote:
>> On 08.02.2020 15:37, Julien Grall wrote:
>>>
>>>
>>> On 05/02/2020 13:27, Jan Beulich wrote:
 On 05.02.2020 14:21, Roger Pau Monné wrote:
> On Wed, Feb 05, 2020 at 09:46:25AM +0100, Jan Beulich wrote:
>> On 04.02.2020 18:34, Roger Pau Monne wrote:
>>> Import the functions and its dependencies. Based on Linux 5.5, commit
>>> id d5226fa6dbae0569ee43ecfc08bdcd6770fc4755.
>>>
>>> Signed-off-by: Roger Pau Monné 
>>
>> Thanks for going this route; two remarks / requests:
>>
>>> --- a/xen/common/bitmap.c
>>> +++ b/xen/common/bitmap.c
>>> @@ -212,6 +212,47 @@ int __bitmap_weight(const unsigned long *bitmap, 
>>> int bits)
>>>#endif
>>>EXPORT_SYMBOL(__bitmap_weight);
>>>
>>> +void __bitmap_set(unsigned long *map, unsigned int start, int len)
>>> +{
>>> +   unsigned long *p = map + BIT_WORD(start);
>>> +   const unsigned int size = start + len;
>>> +   int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
>>> +   unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
>>> +
>>> +   while (len - bits_to_set >= 0) {
>>> +   *p |= mask_to_set;
>>> +   len -= bits_to_set;
>>> +   bits_to_set = BITS_PER_LONG;
>>> +   mask_to_set = ~0UL;
>>> +   p++;
>>> +   }
>>> +   if (len) {
>>> +   mask_to_set &= BITMAP_LAST_WORD_MASK(size);
>>> +   *p |= mask_to_set;
>>> +   }
>>> +}
>>> +EXPORT_SYMBOL(__bitmap_set);
>>> +
>>> +void __bitmap_clear(unsigned long *map, unsigned int start, int len)
>>> +{
>>> +   unsigned long *p = map + BIT_WORD(start);
>>> +   const unsigned int size = start + len;
>>> +   int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
>>> +   unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
>>> +
>>> +   while (len - bits_to_clear >= 0) {
>>> +   *p &= ~mask_to_clear;
>>> +   len -= bits_to_clear;
>>> +   bits_to_clear = BITS_PER_LONG;
>>> +   mask_to_clear = ~0UL;
>>> +   p++;
>>> +   }
>>> +   if (len) {
>>> +   mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
>>> +   *p &= ~mask_to_clear;
>>> +   }
>>> +}
>>> +EXPORT_SYMBOL(__bitmap_clear);
>>
>> Despite all the other EXPORT_SYMBOL() in this file, personally I
>> would suggest to refrain from adding more. But I'm not going to
>> insist (until such time that they all get cleaned up).
>>
>>> --- a/xen/include/asm-x86/bitops.h
>>> +++ b/xen/include/asm-x86/bitops.h
>>> @@ -480,4 +480,6 @@ static inline int fls(unsigned int x)
>>>#define hweight16(x) generic_hweight16(x)
>>>#define hweight8(x) generic_hweight8(x)
>>>
>>> +#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)
>>
>> At first I thought - why for x86 only? Then I noticed Arm has an
>> almost identical #define already. Which in turn made me look at
>> Linux, where that #define lives in a common header. I think you
>> want to move the Arm one. Or wait, no - Arm's isn't even
>> compatible with the implementations of the functions you add.
>> This definitely needs taking care of, perhaps by way of ignoring
>> my request to go this route (as getting too involved).
>
> Urg, yes, I didn't realize that BIT_WORD on ARM is only meant to be
> used when the bitmap is mapped to an array of 32bit type elements.
>
> I could introduce BIT_LONG that would have the same definition on Arm
> and x86, and then modify the imported functions to use it, but IMO the
> right solution would be to change the Arm BIT_WORD macro to also use
> BITS_PER_LONG (and adjust the callers).

 So do I. Julien, Stefano?
>>>
>>> BIT_WORD used to use BITS_PER_LONG but this was changed in commit:
>>>
>>> commit cd338e967c598bf747b03dcfd9d8d45dc40bac1a
>>> Author: Ian Campbell 
>>> Date:   Thu May 8 16:13:55 2014 +0100
>>>
>>>   xen: arm: bitops take unsigned int
>>>
>>>   Xen bitmaps can be 4 rather than 8 byte aligned, so use the
>>> appropriate type.
>>>   Otherwise the compiler can generate unaligned 8 byte accesses and
>>> cause traps.
>>>
>>>   Signed-off-by: Ian Campbell 
>>>   Acked-by: Stefano Stabellini 
>>>
>>> On 64-bit Arm, while we allow unaligned access, the atomic operations
>>> still enforce alignment.
>>>
>>> On 32-bit Arm, no unaligned accesses are allowed. However, the
>>> change of BIT_WORD is not a concern for 32-bit.
>>>
>>> I haven't checked whether we still have places where bitops are used with
>>> 4 byte aligned memory. However, as the bitops take a void * in
>>> parameter, there are 

Re: [Xen-devel] [PATCH v4 2/3] bitmap: import bitmap_{set/clear} from Linux 5.5

2020-02-10 Thread Julien Grall

Hi Jan,

On 10/02/2020 08:43, Jan Beulich wrote:

On 08.02.2020 15:37, Julien Grall wrote:



On 05/02/2020 13:27, Jan Beulich wrote:

On 05.02.2020 14:21, Roger Pau Monné wrote:

On Wed, Feb 05, 2020 at 09:46:25AM +0100, Jan Beulich wrote:

On 04.02.2020 18:34, Roger Pau Monne wrote:

Import the functions and its dependencies. Based on Linux 5.5, commit
id d5226fa6dbae0569ee43ecfc08bdcd6770fc4755.

Signed-off-by: Roger Pau Monné 


Thanks for going this route; two remarks / requests:


--- a/xen/common/bitmap.c
+++ b/xen/common/bitmap.c
@@ -212,6 +212,47 @@ int __bitmap_weight(const unsigned long *bitmap, int bits)
   #endif
   EXPORT_SYMBOL(__bitmap_weight);
   
+void __bitmap_set(unsigned long *map, unsigned int start, int len)

+{
+   unsigned long *p = map + BIT_WORD(start);
+   const unsigned int size = start + len;
+   int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
+   unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
+
+   while (len - bits_to_set >= 0) {
+   *p |= mask_to_set;
+   len -= bits_to_set;
+   bits_to_set = BITS_PER_LONG;
+   mask_to_set = ~0UL;
+   p++;
+   }
+   if (len) {
+   mask_to_set &= BITMAP_LAST_WORD_MASK(size);
+   *p |= mask_to_set;
+   }
+}
+EXPORT_SYMBOL(__bitmap_set);
+
+void __bitmap_clear(unsigned long *map, unsigned int start, int len)
+{
+   unsigned long *p = map + BIT_WORD(start);
+   const unsigned int size = start + len;
+   int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
+   unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
+
+   while (len - bits_to_clear >= 0) {
+   *p &= ~mask_to_clear;
+   len -= bits_to_clear;
+   bits_to_clear = BITS_PER_LONG;
+   mask_to_clear = ~0UL;
+   p++;
+   }
+   if (len) {
+   mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
+   *p &= ~mask_to_clear;
+   }
+}
+EXPORT_SYMBOL(__bitmap_clear);


Despite all the other EXPORT_SYMBOL() in this file, personally I
would suggest to refrain from adding more. But I'm not going to
insist (until such time that they all get cleaned up).


--- a/xen/include/asm-x86/bitops.h
+++ b/xen/include/asm-x86/bitops.h
@@ -480,4 +480,6 @@ static inline int fls(unsigned int x)
   #define hweight16(x) generic_hweight16(x)
   #define hweight8(x) generic_hweight8(x)
   
+#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)


At first I thought - why for x86 only? Then I noticed Arm has an
almost identical #define already. Which in turn made me look at
Linux, where that #define lives in a common header. I think you
want to move the Arm one. Or wait, no - Arm's isn't even
compatible with the implementations of the functions you add.
This definitely needs taking care of, perhaps by way of ignoring
my request to go this route (as getting too involved).


Urg, yes, I didn't realize that BIT_WORD on ARM is only meant to be
used when the bitmap is mapped to an array of 32bit type elements.

I could introduce BIT_LONG that would have the same definition on Arm
and x86, and then modify the imported functions to use it, but IMO the
right solution would be to change the Arm BIT_WORD macro to also use
BITS_PER_LONG (and adjust the callers).


So do I. Julien, Stefano?


BIT_WORD used to use BITS_PER_LONG but this was changed in commit:

commit cd338e967c598bf747b03dcfd9d8d45dc40bac1a
Author: Ian Campbell 
Date:   Thu May 8 16:13:55 2014 +0100

  xen: arm: bitops take unsigned int

  Xen bitmaps can be 4 rather than 8 byte aligned, so use the
appropriate type.
  Otherwise the compiler can generate unaligned 8 byte accesses and
cause traps.

  Signed-off-by: Ian Campbell 
  Acked-by: Stefano Stabellini 

On 64-bit Arm, while we allow unaligned access, the atomic operations
still enforce alignment.

On 32-bit Arm, no unaligned accesses are allowed. However, the
change of BIT_WORD is not a concern for 32-bit.

I haven't checked whether we still have places where bitops are used with
4 byte aligned memory. However, as the bitops take a void * in
parameter, there is no promise on the alignment.


I'm pretty sure for x86 the 32-bit guest compat code uses such, at
the very least.


I have spent some time looking at it and noticed there are some in the 
common code (e.g. scheduler, IRQ...).





Therefore, we can't rewrite BIT_WORD without addressing the underlying
issues. Introducing BIT_LONG is probably the easiest way at the moment.


Which would make us (continue to) deviate from Linux's meaning of
BIT_WORD().


This would not really be the first time we deviate from Linux... We 
could possibly rename BIT_WORD to something different, but I don't have 
a good name, which is why I think BIT_LONG is the best solution so far.





However, our bitops really ought to specify the alignment via the parameter
type to avoid such issues arising.

I would be in 

Re: [Xen-devel] Xen fails to resume on AMD Fam15h (and Fam17h?) because of CPUID mismatch

2020-02-10 Thread Jan Beulich
On 10.02.2020 00:06, Marek Marczykowski-Górecki wrote:
> Hi,
> 
> Multiple Qubes users have reported issues with resuming from S3 on AMD
> systems (Ryzen 2500U, Ryzen Pro 3700U, maybe more). The error message
> is:
> 
> (XEN) CPU0: cap[ 1] is 7ed8320b (expected f6d8320b)
> 
> If I read it right, this is:
>   - OSXSAVE: 0 -> 1
>   - HYPERVISOR: 1 -> 0
> 
> Commenting out the panic on a failed recheck_cpu_features() in power.c
> makes the system work after resume, reportedly stable. But that doesn't
> sound like a good idea generally.
> 
> Is this difference a Xen fault (some missing MSR / other register
> restore on resume)? Or BIOS vendor / AMD, that could be worked around in
> Xen?

The transition of the HYPERVISOR bit is definitely a Xen issue,
with Andrew having sent a patch already (iirc). The OSXSAVE part
is a little more surprising, as I wouldn't think behavior there
should differ between Intel and AMD. I take your report though
to imply that you see the issue on AMD systems only? (You also
don't mention the Xen version, which may matter as there may
have been ordering changes of when the full original CR4 value
gets restored.)

Jan
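
For reference, the two words in the report differ only in leaf-1 ECX bits 27
and 31.  A quick sketch of decoding them, using the standard CPUID.1:ECX bit
positions (this is not Xen code):

#include <stdint.h>
#include <stdio.h>

#define X86_FEATURE_OSXSAVE    (1u << 27)  /* CPUID.1:ECX.OSXSAVE */
#define X86_FEATURE_HYPERVISOR (1u << 31)  /* CPUID.1:ECX hypervisor-present */

int main(void)
{
    uint32_t expected = 0xf6d8320b, observed = 0x7ed8320b;

    /* expected ^ observed == 0x88000000, i.e. only bits 27 and 31 differ. */
    printf("OSXSAVE:    %u -> %u\n",
           !!(expected & X86_FEATURE_OSXSAVE),
           !!(observed & X86_FEATURE_OSXSAVE));
    printf("HYPERVISOR: %u -> %u\n",
           !!(expected & X86_FEATURE_HYPERVISOR),
           !!(observed & X86_FEATURE_HYPERVISOR));

    return 0;
}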

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 2/3] bitmap: import bitmap_{set/clear} from Linux 5.5

2020-02-10 Thread Jan Beulich
On 08.02.2020 15:37, Julien Grall wrote:
> 
> 
> On 05/02/2020 13:27, Jan Beulich wrote:
>> On 05.02.2020 14:21, Roger Pau Monné wrote:
>>> On Wed, Feb 05, 2020 at 09:46:25AM +0100, Jan Beulich wrote:
 On 04.02.2020 18:34, Roger Pau Monne wrote:
> Import the functions and its dependencies. Based on Linux 5.5, commit
> id d5226fa6dbae0569ee43ecfc08bdcd6770fc4755.
>
> Signed-off-by: Roger Pau Monné 

 Thanks for going this route; two remarks / requests:

> --- a/xen/common/bitmap.c
> +++ b/xen/common/bitmap.c
> @@ -212,6 +212,47 @@ int __bitmap_weight(const unsigned long *bitmap, int 
> bits)
>   #endif
>   EXPORT_SYMBOL(__bitmap_weight);
>   
> +void __bitmap_set(unsigned long *map, unsigned int start, int len)
> +{
> + unsigned long *p = map + BIT_WORD(start);
> + const unsigned int size = start + len;
> + int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
> + unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
> +
> + while (len - bits_to_set >= 0) {
> + *p |= mask_to_set;
> + len -= bits_to_set;
> + bits_to_set = BITS_PER_LONG;
> + mask_to_set = ~0UL;
> + p++;
> + }
> + if (len) {
> + mask_to_set &= BITMAP_LAST_WORD_MASK(size);
> + *p |= mask_to_set;
> + }
> +}
> +EXPORT_SYMBOL(__bitmap_set);
> +
> +void __bitmap_clear(unsigned long *map, unsigned int start, int len)
> +{
> + unsigned long *p = map + BIT_WORD(start);
> + const unsigned int size = start + len;
> + int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
> + unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
> +
> + while (len - bits_to_clear >= 0) {
> + *p &= ~mask_to_clear;
> + len -= bits_to_clear;
> + bits_to_clear = BITS_PER_LONG;
> + mask_to_clear = ~0UL;
> + p++;
> + }
> + if (len) {
> + mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
> + *p &= ~mask_to_clear;
> + }
> +}
> +EXPORT_SYMBOL(__bitmap_clear);

 Despite all the other EXPORT_SYMBOL() in this file, personally I
 would suggest to refrain from adding more. But I'm not going to
 insist (until such time that they all get cleaned up).

> --- a/xen/include/asm-x86/bitops.h
> +++ b/xen/include/asm-x86/bitops.h
> @@ -480,4 +480,6 @@ static inline int fls(unsigned int x)
>   #define hweight16(x) generic_hweight16(x)
>   #define hweight8(x) generic_hweight8(x)
>   
> +#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)

 At first I thought - why for x86 only? Then I noticed Arm has an
 almost identical #define already. Which in turn made me look at
 Linux, where that #define lives in a common header. I think you
 want to move the Arm one. Or wait, no - Arm's isn't even
 compatible with the implementations of the functions you add.
 This definitely needs taking care of, perhaps by way of ignoring
 my request to go this route (as getting too involved).
>>>
>>> Urg, yes, I didn't realize that BIT_WORD on ARM is only meant to be
>>> used when the bitmap is mapped to an array of 32bit type elements.
>>>
>>> I could introduce BIT_LONG that would have the same definition on Arm
>>> and x86, and then modify the imported functions to use it, but IMO the
>>> right solution would be to change the Arm BIT_WORD macro to also use
>>> BITS_PER_LONG (and adjust the callers).
>>
>> So do I. Julien, Stefano?
> 
> BIT_WORD used to use BITS_PER_LONG but this was changed in commit:
> 
> commit cd338e967c598bf747b03dcfd9d8d45dc40bac1a
> Author: Ian Campbell 
> Date:   Thu May 8 16:13:55 2014 +0100
> 
>  xen: arm: bitops take unsigned int
> 
>  Xen bitmaps can be 4 rather than 8 byte aligned, so use the 
> appropriate type.
>  Otherwise the compiler can generate unaligned 8 byte accesses and 
> cause traps.
> 
>  Signed-off-by: Ian Campbell 
>  Acked-by: Stefano Stabellini 
> 
> On 64-bit Arm, while we allow unaligned access, the atomic operations 
> still enforce alignment.
> 
> On 32-bit Arm, no unaligned accesses are allowed. However, the 
> change of BIT_WORD is not a concern for 32-bit.
> 
> I haven't checked whether we still have places where bitops are used with 
> 4 byte aligned memory. However, as the bitops take a void * in 
> parameter, there is no promise on the alignment.

I'm pretty sure for x86 the 32-bit guest compat code uses such, at
the very least.

> Therefore, we can't rewrite BIT_WORD without addressing the underlying 
> issues. Introducing BIT_LONG is probably the easiest way at the moment.

Which would make us (continue to) deviate from Linux's meaning of
BIT_WORD().

> However, our bitops really ought to specify the alignment via the parameter 
> type to avoid such issues arising.
> 
> I would be in favor of using unsigned long *.

I