[qemu-mainline test] 156646: regressions - FAIL

2020-11-11 Thread osstest service owner
flight 156646 qemu-mainline real [real]
flight 156704 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156646/
http://logs.test-lab.xenproject.org/osstest/logs/156704/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start  fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd  12 debian-di-install fail REGR. vs. 152631
 test-amd64-amd64-xl-xsm  20 guest-localmigrate/x10   fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt 14 guest-start  fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 152631
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check fail   never pass
 test-amd64-i386-libvirt  15 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check fail   never pass
 test-amd64-i386-xl-pvshim 14 guest-start  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl  16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail   never pass

version targeted for testing:
 qemuu                879860ca706fa1ef47ba511c49a6e2b1b49be9b7
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   83 days
Failing since        152659  2020-08-21 14:07:39 Z   82 days  178 attempts
Testing same since   156646  2020-11-10 20:59:15 Z    1 days    1 attempts


People who touched revisions under test:
Aaron Lindsay 
  Alberto Garcia 
  Aleksandar Markovic 
  Alex Bennée 
  Alex Chen 
  Alex Williamson 
  Alexander Bulekov 
  AlexChen 
  Alexey Kirillov 
  Alistair Francis 
  Alistair Francis 
  Amey Narkhede 
  Ana 

[libvirt test] 156671: regressions - FAIL

2020-11-11 Thread osstest service owner
flight 156671 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156671/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt   6 libvirt-build fail REGR. vs. 151777
 build-amd64-libvirt   6 libvirt-build fail REGR. vs. 151777
 build-i386-libvirt    6 libvirt-build fail REGR. vs. 151777
 build-arm64-libvirt   6 libvirt-build fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a

version targeted for testing:
 libvirt  d44a8203e7d3be30daf6a95c1fcbe67e38e8b475
baseline version:
 libvirt  2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  125 days
Failing since        151818  2020-07-11 04:18:52 Z  124 days  118 attempts
Testing same since   156671  2020-11-11 04:19:17 Z    1 days    1 attempts


People who touched revisions under test:
  Adolfo Jayme Barrientos 
  Aleksandr Alekseev 
  Andika Triwidada 
  Andrea Bolognani 
  Balázs Meskó 
  Bastien Orivel 
  Bihong Yu 
  Binfeng Wu 
  Boris Fiuczynski 
  Brian Turek 
  Christian Ehrhardt 
  Christian Schoenebeck 
  Cole Robinson 
  Collin Walling 
  Cornelia Huck 
  Côme Borsoi 
  Daniel Henrique Barboza 
  Daniel Letai 
  Daniel P. Berrange 
  Daniel P. Berrangé 
  Erik Skultety 
  Fabian Freyer 
  Fangge Jin 
  Fedora Weblate Translation 
  Göran Uddeborg 
  Halil Pasic 
  Han Han 
  Hao Wang 
  Ian Wienand 
  Jamie Strandboge 
  Jamie Strandboge 
  Jean-Baptiste Holcroft 
  Jianan Gao 
  Jim Fehlig 
  Jin Yan 
  Jiri Denemark 
  Jonathon Jongsma 
  Julio Faracco 
  Ján Tomko 
  Kashyap Chamarthy 
  Kevin Locke 
  Laine Stump 
  Liao Pingfang 
  Lin Ma 
  Lin Ma 
  Lin Ma 
  Marc Hartmayer 
  Marek Marczykowski-Górecki 
  Markus Schade 
  Martin Kletzander 
  Masayoshi Mizuma 
  Matt Coleman 
  Matt Coleman 
  Mauro Matteo Cascella 
  Michal Privoznik 
  Michał Smyk 
  Milo Casagrande 
  Neal Gompa 
  Nico Pache 
  Nikolay Shirokovskiy 
  Olesya Gerasimenko 
  Orion Poplawski 
  Patrick Magauran 
  Paulo de Rezende Pinatti 
  Pavel Hrdina 
  Peter Krempa 
  Pino Toscano 
  Pino Toscano 
  Piotr Drąg 
  Prathamesh Chavan 
  Roman Bogorodskiy 
  Roman Bolshakov 
  Ryan Schmidt 
  Sam Hartman 
  Scott Shambarger 
  Sebastian Mitterle 
  Simon Gaiser 
  Stefan Bader 
  Stefan Berger 
  Szymon Scholz 
  Thomas Huth 
  Tim Wiederhake 
  Tomáš Golembiovský 
  Wang Xin 
  Weblate 
  Yang Hang 
  Yanqiu Zhang 
  Yi Li 
  Yi Wang 
  Yuri Chornoivan 
  Zheng Chuan 
  zhenwei pi 
  Zhenyu Zheng 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-arm64-libvirt  fail
 build-armhf-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmblocked 
 test-amd64-amd64-libvirt-xsm   

[xen-unstable-smoke test] 156689: tolerable all pass - PUSHED

2020-11-11 Thread osstest service owner
flight 156689 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156689/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl  16 saverestore-support-check fail   never pass

version targeted for testing:
 xen  5505f5f8e7e805365cfe70b6a4af6115940bb749
baseline version:
 xen  69634224afaf84474f04e1ab050f216d66bcda68

Last test of basis   156679  2020-11-11 08:00:30 Z    0 days
Testing same since   156689  2020-11-11 22:01:23 Z    0 days    1 attempts


People who touched revisions under test:
  Julien Grall 
  Penny Zheng 
  Stefano Stabellini 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   69634224af..5505f5f8e7  5505f5f8e7e805365cfe70b6a4af6115940bb749 -> smoke



[xen-4.13-testing test] 156636: tolerable FAIL - PUSHED

2020-11-11 Thread osstest service owner
flight 156636 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156636/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 156593
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 156593
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 156593
 test-armhf-armhf-libvirt 16 saverestore-support-check fail  like 156593
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156593
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 156593
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop fail like 156593
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 156593
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 156593
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail  like 156593
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop fail like 156593
 test-arm64-arm64-xl  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl  16 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-check fail   never pass
 test-amd64-i386-xl-pvshim 14 guest-start  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail   never pass
 test-armhf-armhf-xl  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check fail   never pass
 test-amd64-i386-libvirt  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail   never pass

version targeted for testing:
 xen  d4c0483c0b87768cd9b95542e98111e4c098d57f
baseline version:
 xen  971a9d14667448427ddea1cf15fd7fd409205c58

Last test of basis   156593  2020-11-09 18:08:08 Z    2 days
Testing same since   156636  2020-11-10 18:06:32 Z    1 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Julien Grall 
  Roger Pau Monné 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf   

[xen-4.12-testing test] 156635: tolerable FAIL - PUSHED

2020-11-11 Thread osstest service owner
flight 156635 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156635/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2 19 guest-localmigrate/x10   fail  like 156592
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 156592
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 156592
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop fail like 156592
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 156592
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop fail like 156592
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 156592
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 156592
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail  like 156592
 test-armhf-armhf-libvirt 16 saverestore-support-check fail  like 156592
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 156592
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156592
 test-amd64-i386-xl-pvshim 14 guest-start  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  15 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl  16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail   never pass
 test-armhf-armhf-xl  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-check fail   never pass

version targeted for testing:
 xen  14c9c0fceae92a18dedc3f280ebf8b9f52e39de5
baseline version:
 xen  46ad8841ac4da8fc2a128e49b8406966bf5d3601

Last test of basis   156592  2020-11-09 18:08:09 Z    2 days
Testing same since   156635  2020-11-10 18:06:22 Z    1 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Julien Grall 
  Roger Pau Monné 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 

[RFC PATCH v2 15/15] xen/arm: remove dependency on gcc built-in __sync_fetch_and_add()

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

Now that we have explicit implementations of LL/SC and LSE atomics
helpers after porting Linux's versions to Xen, we can drop the reference
to gcc's built-in __sync_fetch_and_add().

This requires some fudging using container_of() because the users of
__sync_fetch_and_add(), namely xen/spinlock.c, expect the ptr to be
directly to the u32 being modified while the atomics helpers expect the
ptr to be to an atomic_t and then access that atomic_t's counter member.

By using container_of() we can create a "fake" (atomic_t *) pointer and
pass that to the atomic_fetch_add() that we ported from Linux.

NOTE: spinlock.c is using u32 for the value being added while the atomics
helpers use int for their counter member. This shouldn't actually matter
because we do the addition in assembly and the compiler isn't smart enough
to detect potential signed integer overflow in inline assembly, but I
thought it worth calling out in the commit message.

Signed-off-by: Ash Wilding 
---
 xen/include/asm-arm/system.h | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index 65d5c8e423..0326e3ade4 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -58,7 +58,14 @@ static inline int local_abort_is_enabled(void)
 return !(flags & PSR_ABT_MASK);
 }
 
-#define arch_fetch_and_add(x, v) __sync_fetch_and_add(x, v)
+#define arch_fetch_and_add(ptr, x) ({                                   \
+    int ret;                                                            \
+                                                                        \
+    atomic_t *tmp = container_of((int *)(ptr), atomic_t, counter);      \
+    ret = atomic_fetch_add(x, tmp);                                     \
+                                                                        \
+    ret;                                                                \
+})
 
 extern struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next);
 
-- 
2.24.3 (Apple Git-128)




[RFC PATCH v2 11/15] xen/arm64: port Linux's arm64 atomic.h to Xen

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

 - Drop atomic64_t helper declarations as we don't currently have an
   atomic64_t in Xen.

 - Drop arch_* prefixes.

 - Swap include of  to just .

Signed-off-by: Ash Wilding 
---
 xen/include/asm-arm/arm64/atomic.h | 256 -
 1 file changed, 73 insertions(+), 183 deletions(-)

diff --git a/xen/include/asm-arm/arm64/atomic.h b/xen/include/asm-arm/arm64/atomic.h
index a2eab9f091..b695cc6e09 100644
--- a/xen/include/asm-arm/arm64/atomic.h
+++ b/xen/include/asm-arm/arm64/atomic.h
@@ -1,23 +1,23 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
+
 /*
- * Based on arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 1996 Russell King.
  * Copyright (C) 2002 Deep Blue Solutions Ltd.
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ASM_ATOMIC_H
-#define __ASM_ATOMIC_H
+#ifndef __ASM_ARM_ARM64_ATOMIC_H
+#define __ASM_ARM_ARM64_ATOMIC_H
 
-#include 
-#include 
+#include 
+#include 
 
-#include 
-#include 
-#include 
+#include "lse.h"
+#include "cmpxchg.h"
 
 #define ATOMIC_OP(op)  \
-static inline void arch_##op(int i, atomic_t *v)   \
+static inline void op(int i, atomic_t *v)  \
 {  \
__lse_ll_sc_body(op, i, v); \
 }
@@ -32,7 +32,7 @@ ATOMIC_OP(atomic_sub)
 #undef ATOMIC_OP
 
 #define ATOMIC_FETCH_OP(name, op)  \
-static inline int arch_##op##name(int i, atomic_t *v)  \
+static inline int op##name(int i, atomic_t *v) \
 {  \
return __lse_ll_sc_body(op##name, i, v);\
 }
@@ -54,175 +54,65 @@ ATOMIC_FETCH_OPS(atomic_sub_return)
 
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_FETCH_OPS
-
-#define ATOMIC64_OP(op)                                            \
-static inline void arch_##op(long i, atomic64_t *v)\
-{  \
-   __lse_ll_sc_body(op, i, v); \
-}
-
-ATOMIC64_OP(atomic64_andnot)
-ATOMIC64_OP(atomic64_or)
-ATOMIC64_OP(atomic64_xor)
-ATOMIC64_OP(atomic64_add)
-ATOMIC64_OP(atomic64_and)
-ATOMIC64_OP(atomic64_sub)
-
-#undef ATOMIC64_OP
-
-#define ATOMIC64_FETCH_OP(name, op)\
-static inline long arch_##op##name(long i, atomic64_t *v)  \
-{  \
-   return __lse_ll_sc_body(op##name, i, v);\
-}
-
-#define ATOMIC64_FETCH_OPS(op) \
-   ATOMIC64_FETCH_OP(_relaxed, op) \
-   ATOMIC64_FETCH_OP(_acquire, op) \
-   ATOMIC64_FETCH_OP(_release, op) \
-   ATOMIC64_FETCH_OP(, op)
-
-ATOMIC64_FETCH_OPS(atomic64_fetch_andnot)
-ATOMIC64_FETCH_OPS(atomic64_fetch_or)
-ATOMIC64_FETCH_OPS(atomic64_fetch_xor)
-ATOMIC64_FETCH_OPS(atomic64_fetch_add)
-ATOMIC64_FETCH_OPS(atomic64_fetch_and)
-ATOMIC64_FETCH_OPS(atomic64_fetch_sub)
-ATOMIC64_FETCH_OPS(atomic64_add_return)
-ATOMIC64_FETCH_OPS(atomic64_sub_return)
-
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_FETCH_OPS
-
-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
-{
-   return __lse_ll_sc_body(atomic64_dec_if_positive, v);
-}
-
-#define arch_atomic_read(v)__READ_ONCE((v)->counter)
-#define arch_atomic_set(v, i)  __WRITE_ONCE(((v)->counter), (i))
-
-#define arch_atomic_add_return_relaxed arch_atomic_add_return_relaxed
-#define arch_atomic_add_return_acquire arch_atomic_add_return_acquire
-#define arch_atomic_add_return_release arch_atomic_add_return_release
-#define arch_atomic_add_return arch_atomic_add_return
-
-#define arch_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
-#define arch_atomic_sub_return_acquire arch_atomic_sub_return_acquire
-#define arch_atomic_sub_return_release arch_atomic_sub_return_release
-#define arch_atomic_sub_return arch_atomic_sub_return
-
-#define arch_atomic_fetch_add_relaxed  arch_atomic_fetch_add_relaxed
-#define arch_atomic_fetch_add_acquire  arch_atomic_fetch_add_acquire
-#define arch_atomic_fetch_add_release  arch_atomic_fetch_add_release
-#define arch_atomic_fetch_add  arch_atomic_fetch_add
-
-#define arch_atomic_fetch_sub_relaxed  arch_atomic_fetch_sub_relaxed
-#define arch_atomic_fetch_sub_acquire  arch_atomic_fetch_sub_acquire
-#define arch_atomic_fetch_sub_release  arch_atomic_fetch_sub_release
-#define 

[RFC PATCH v2 12/15] xen/arm64: port Linux's arm64 lse.h to Xen

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

This just involves making system_uses_lse_atomics() call cpus_have_cap()
instead of directly looking up in cpu_hwcap_keys.

Not 100% sure whether this is a valid transformation until I do a run
test.

Signed-off-by: Ash Wilding 
---
 xen/include/asm-arm/arm64/lse.h | 30 +++---
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/xen/include/asm-arm/arm64/lse.h b/xen/include/asm-arm/arm64/lse.h
index 704be3e4e4..847727f219 100644
--- a/xen/include/asm-arm/arm64/lse.h
+++ b/xen/include/asm-arm/arm64/lse.h
@@ -1,28 +1,28 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __ASM_LSE_H
-#define __ASM_LSE_H
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+#ifndef __ASM_ARM_ARM64_LSE_H
+#define __ASM_ARM_ARM64_LSE_H
 
-#include 
+#include "atomic_ll_sc.h"
 
 #ifdef CONFIG_ARM64_LSE_ATOMICS
 
 #define __LSE_PREAMBLE ".arch_extension lse\n"
 
-#include 
-#include 
-#include 
-#include 
+#include 
+#include 
+#include 
+
 #include 
-#include 
-#include 
 
-extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
-extern struct static_key_false arm64_const_caps_ready;
+#include "atomic_lse.h"
 
 static inline bool system_uses_lse_atomics(void)
 {
-   return (static_branch_likely(&arm64_const_caps_ready)) &&
-   static_branch_likely(&cpu_hwcap_keys[ARM64_HAS_LSE_ATOMICS]);
+   return cpus_have_cap(ARM64_HAS_LSE_ATOMICS);
 }
 
 #define __lse_ll_sc_body(op, ...)  \
@@ -45,4 +45,4 @@ static inline bool system_uses_lse_atomics(void) { return false; }
 #define ARM64_LSE_ATOMIC_INSN(llsc, lse)   llsc
 
 #endif /* CONFIG_ARM64_LSE_ATOMICS */
-#endif /* __ASM_LSE_H */
\ No newline at end of file
+#endif /* __ASM_ARM_ARM64_LSE_H */
\ No newline at end of file
-- 
2.24.3 (Apple Git-128)




[RFC PATCH v2 10/15] xen/arm64: port Linux's arm64 cmpxchg.h to Xen

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

 - s/arch_xchg/xchg/

 - s/arch_cmpxchg/cmpxchg/

 - Replace calls to BUILD_BUG() with calls to __bad_cmpxchg() as we
   don't currently have a BUILD_BUG() macro in Xen and this will
   equivalently cause a link-time error.

 - Replace calls to VM_BUG_ON() with BUG_ON() as we don't currently
   have a VM_BUG_ON() macro in Xen.

 - Pull in the timeout variants of cmpxchg from the original Xen
   arm64 cmpxchg.h as these are required for guest atomics and are
   not provided by Linux. Note these are always using LL/SC so we
   should ideally write LSE versions at some point.

Signed-off-by: Ash Wilding 
---
 xen/include/asm-arm/arm64/cmpxchg.h | 165 ++--
 1 file changed, 131 insertions(+), 34 deletions(-)

diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
index c51388216e..a5282cf66e 100644
--- a/xen/include/asm-arm/arm64/cmpxchg.h
+++ b/xen/include/asm-arm/arm64/cmpxchg.h
@@ -1,17 +1,16 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Based on arch/arm/include/asm/cmpxchg.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ASM_CMPXCHG_H
-#define __ASM_CMPXCHG_H
+#ifndef __ASM_ARM_ARM64_CMPXCHG_H
+#define __ASM_ARM_ARM64_CMPXCHG_H
 
-#include 
-#include 
+#include 
+#include "lse.h"
 
-#include 
-#include 
+extern unsigned long __bad_cmpxchg(volatile void *ptr, int size);
 
 /*
  * We need separate acquire parameters for ll/sc and lse, since the full
@@ -33,7 +32,9 @@ static inline u##sz __xchg_case_##name##sz(u##sz x, volatile void *ptr)   \
"   " #mb,  
\
/* LSE atomics */   
\
"   swp" #acq_lse #rel #sfx "\t%" #w "3, %" #w "0, %2\n"
\
-   __nops(3)   
\
+   "nop\n" \
+   "nop\n" \
+   "nop\n" \
"   " #nop_lse) 
\
: "=" (ret), "=" (tmp), "+Q" (*(u##sz *)ptr)
\
: "r" (x)   
\
@@ -62,7 +63,7 @@ __XCHG_CASE( ,  ,  mb_, 64, dmb ish, nop,  , a, l, "memory")
 #undef __XCHG_CASE
 
 #define __XCHG_GEN(sfx)						\
-static __always_inline  unsigned long __xchg##sfx(unsigned long x, \
+static always_inline  unsigned long __xchg##sfx(unsigned long x,   \
volatile void *ptr, \
int size)   \
 {  \
@@ -76,7 +77,7 @@ static __always_inline  unsigned long __xchg##sfx(unsigned long x,	\
case 8: \
return __xchg_case##sfx##_64(x, ptr);   \
default:\
-   BUILD_BUG();\
+	return __bad_cmpxchg(ptr, size);			\
}   \
\
unreachable();  \
@@ -98,10 +99,10 @@ __XCHG_GEN(_mb)
 })
 
 /* xchg */
-#define arch_xchg_relaxed(...) __xchg_wrapper(, __VA_ARGS__)
-#define arch_xchg_acquire(...) __xchg_wrapper(_acq, __VA_ARGS__)
-#define arch_xchg_release(...) __xchg_wrapper(_rel, __VA_ARGS__)
-#define arch_xchg(...) __xchg_wrapper( _mb, __VA_ARGS__)
+#define xchg_relaxed(...)  __xchg_wrapper(, __VA_ARGS__)
+#define xchg_acquire(...)  __xchg_wrapper(_acq, __VA_ARGS__)
+#define xchg_release(...)  __xchg_wrapper(_rel, __VA_ARGS__)
+#define xchg(...)  __xchg_wrapper( _mb, __VA_ARGS__)
 
 #define __CMPXCHG_CASE(name, sz)   \
 static inline u##sz __cmpxchg_case_##name##sz(volatile void *ptr,  \
@@ -148,7 +149,7 @@ __CMPXCHG_DBL(_mb)
 #undef __CMPXCHG_DBL
 
 #define __CMPXCHG_GEN(sfx) \
-static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr,	\
+static always_inline unsigned long __cmpxchg##sfx(volatile void *ptr,  \
   unsigned long old,   \
   unsigned long new,   \
   int size)\
@@ -163,7 +164,7 @@ static 

[RFC PATCH v2 14/15] xen/arm32: port Linux's arm32 cmpxchg.h to Xen

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

 - Drop support for pre-Armv7 systems, including the workarounds for
   the swp instruction being broken on StrongARM.

 - Drop local variants as no callers in Xen.

 - Keep the compiler happy by fixing __cmpxchg64()'s ptr arg to be
   volatile, and casting ptr to be (const void *) in the call to
   prefetchw().

 - Add explicit strict variants of xchg(), cmpxchg(), and cmpxchg64(),
   as the Linux arm32 cmpxchg.h doesn't define these and they're needed
   for Xen. These strict variants are just wrappers that sandwich calls
   to the relaxed variants between two smp_mb()'s.

 - Pull in the timeout variants of cmpxchg from the original Xen
   arm32 cmpxchg.h as these are required for guest atomics and are
   not provided by Linux.

Signed-off-by: Ash Wilding 
---
 xen/include/asm-arm/arm32/cmpxchg.h | 322 
 1 file changed, 188 insertions(+), 134 deletions(-)

diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm32/cmpxchg.h
index 638ae84afb..d7189984d0 100644
--- a/xen/include/asm-arm/arm32/cmpxchg.h
+++ b/xen/include/asm-arm/arm32/cmpxchg.h
@@ -1,46 +1,24 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __ASM_ARM_CMPXCHG_H
-#define __ASM_ARM_CMPXCHG_H
-
-#include 
-#include 
-#include 
-
-#if defined(CONFIG_CPU_SA1100) || defined(CONFIG_CPU_SA110)
 /*
- * On the StrongARM, "swp" is terminally broken since it bypasses the
- * cache totally.  This means that the cache becomes inconsistent, and,
- * since we use normal loads/stores as well, this is really bad.
- * Typically, this causes oopsen in filp_close, but could have other,
- * more disastrous effects.  There are two work-arounds:
- *  1. Disable interrupts and emulate the atomic swap
- *  2. Clean the cache, perform atomic swap, flush the cache
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
- * We choose (1) since its the "easiest" to achieve here and is not
- * dependent on the processor type.
- *
- * NOTE that this solution won't work on an SMP system, so explcitly
- * forbid it here.
+ * SPDX-License-Identifier: GPL-2.0
  */
-#define swp_is_buggy
-#endif
+#ifndef __ASM_ARM_ARM32_CMPXCHG_H
+#define __ASM_ARM_ARM32_CMPXCHG_H
+
+#include 
+#include 
+
+extern void __bad_cmpxchg(volatile void *ptr, int size);
 
 static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
 {
-   extern void __bad_xchg(volatile void *, int);
unsigned long ret;
-#ifdef swp_is_buggy
-   unsigned long flags;
-#endif
-#if __LINUX_ARM_ARCH__ >= 6
unsigned int tmp;
-#endif
 
prefetchw((const void *)ptr);
 
switch (size) {
-#if __LINUX_ARM_ARCH__ >= 6
-#ifndef CONFIG_CPU_V6 /* MIN ARCH >= V6K */
case 1:
asm volatile("@ __xchg1\n"
"1: ldrexb  %0, [%3]\n"
@@ -61,7 +39,6 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
: "r" (x), "r" (ptr)
: "memory", "cc");
break;
-#endif
case 4:
asm volatile("@ __xchg4\n"
"1: ldrex   %0, [%3]\n"
@@ -72,42 +49,10 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
: "r" (x), "r" (ptr)
: "memory", "cc");
break;
-#elif defined(swp_is_buggy)
-#ifdef CONFIG_SMP
-#error SMP is not supported on this platform
-#endif
-   case 1:
-   raw_local_irq_save(flags);
-   ret = *(volatile unsigned char *)ptr;
-   *(volatile unsigned char *)ptr = x;
-   raw_local_irq_restore(flags);
-   break;
 
-   case 4:
-   raw_local_irq_save(flags);
-   ret = *(volatile unsigned long *)ptr;
-   *(volatile unsigned long *)ptr = x;
-   raw_local_irq_restore(flags);
-   break;
-#else
-   case 1:
-   asm volatile("@ __xchg1\n"
-   "   swpb%0, %1, [%2]"
-   : "=&r" (ret)
-   : "r" (x), "r" (ptr)
-   : "memory", "cc");
-   break;
-   case 4:
-   asm volatile("@ __xchg4\n"
-   "   swp %0, %1, [%2]"
-   : "=&r" (ret)
-   : "r" (x), "r" (ptr)
-   : "memory", "cc");
-   break;
-#endif
default:
-   /* Cause a link-time error, the xchg() size is not supported */
-   __bad_xchg(ptr, size), ret = 0;
+   /* Cause a link-time error, the size is not supported */
+   __bad_cmpxchg(ptr, size), ret = 0;
break;
}
 
@@ -119,40 +64,6 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
   sizeof(*(ptr))); \
 })
 
-#include 
-
-#if __LINUX_ARM_ARCH__ < 6
-/* min ARCH < ARMv6 */
-

[RFC PATCH v2 13/15] xen/arm32: port Linux's arm32 atomic.h to Xen

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

 - Drop arch_* prefixes.

 - Redirect include of  to , and swap
   usage of READ_ONCE()/WRITE_ONCE() to the __* versions accordingly.
   As discussed earlier in the series, we can do this because we're
   accessing an atomic_t's counter member, which is an int, so the
   extra checks performed by READ_ONCE()/WRITE_ONCE() are redundant,
   and this actually matches the Linux arm64 code which already uses
   the __* variants.

 - Drop support for pre-Armv7 systems.

 - Drop atomic64_t helper definitions as we don't currently have an
   atomic64_t in Xen.

 - Add explicit strict variants of atomic_{add,sub}_return() as Linux
   does not define these for arm32 and they're needed for Xen. These
   strict variants are just wrappers that sandwich a call to the
   relaxed variants between two smp_mb()'s.

Signed-off-by: Ash Wilding 
---
 xen/include/asm-arm/arm32/atomic.h | 357 +++--
 1 file changed, 28 insertions(+), 329 deletions(-)

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
index ac6338dd9b..2d8cd3c586 100644
--- a/xen/include/asm-arm/arm32/atomic.h
+++ b/xen/include/asm-arm/arm32/atomic.h
@@ -1,31 +1,26 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- *  arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
- *  Copyright (C) 1996 Russell King.
- *  Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ASM_ARM_ATOMIC_H
-#define __ASM_ARM_ATOMIC_H
+#ifndef __ASM_ARM_ARM32_ATOMIC_H
+#define __ASM_ARM_ARM32_ATOMIC_H
 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-
-#ifdef __KERNEL__
+#include 
+#include 
+#include 
+#include "system.h"
+#include "cmpxchg.h"
 
 /*
  * On ARM, ordinary assignment (str instruction) doesn't clear the local
  * strex/ldrex monitor on some implementations. The reason we can use it for
  * atomic_set() is the clrex or dummy strex done on every exception return.
  */
-#define atomic_read(v) READ_ONCE((v)->counter)
-#define atomic_set(v,i)        WRITE_ONCE(((v)->counter), (i))
-
-#if __LINUX_ARM_ARCH__ >= 6
+#define atomic_read(v) __READ_ONCE((v)->counter)
+#define atomic_set(v,i)        __WRITE_ONCE(((v)->counter), (i))
 
 /*
  * ARMv6 UP and SMP safe atomic ops.  We use load exclusive and
@@ -153,68 +148,6 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 }
 #define atomic_fetch_add_unless        atomic_fetch_add_unless
 
-#else /* ARM_ARCH_6 */
-
-#ifdef CONFIG_SMP
-#error SMP not supported on pre-ARMv6 CPUs
-#endif
-
-#define ATOMIC_OP(op, c_op, asm_op)\
-static inline void atomic_##op(int i, atomic_t *v) \
-{  \
-   unsigned long flags;\
-   \
-   raw_local_irq_save(flags);  \
-   v->counter c_op i;  \
-   raw_local_irq_restore(flags);   \
-}  \
-
-#define ATOMIC_OP_RETURN(op, c_op, asm_op) \
-static inline int atomic_##op##_return(int i, atomic_t *v) \
-{  \
-   unsigned long flags;\
-   int val;\
-   \
-   raw_local_irq_save(flags);  \
-   v->counter c_op i;  \
-   val = v->counter;   \
-   raw_local_irq_restore(flags);   \
-   \
-   return val; \
-}
-
-#define ATOMIC_FETCH_OP(op, c_op, asm_op)  \
-static inline int atomic_fetch_##op(int i, atomic_t *v)\
-{  \
-   unsigned long flags;\
-   int val;\
-   \
-   raw_local_irq_save(flags);  \
-   val = v->counter;   \
-   v->counter c_op i;  \
-   raw_local_irq_restore(flags);  

[RFC PATCH v2 05/15] xen/arm: pull in Linux atomics helpers and dependencies

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

This patch pulls in Linux's atomics helpers for arm32 and arm64, and
their dependencies, as-is.

Note that Linux's arm32 atomics helpers use the READ_ONCE() and
WRITE_ONCE() macros defined in , while Linux's
arm64 atomics helpers use __READ_ONCE() and __WRITE_ONCE().

The only difference is that the __* versions skip checking whether the
object being accessed is the same size as a native C type (e.g. char,
int, long, etc). Given the arm32 helpers are using the macros to access
an atomic_t's counter member, which is an int, we can skip this check by
using the __* versions like the arm64 code does.

I mention this here because it forms the first part of my justification
for *not* copying Linux's  to Xen; the size
check described above is performed by __native_word() defined in that
header.

The second part of my justification may be more contentious; as you'll
see in the next patch, I intend to drop the __unqual_scalar_typeof()
calls in __READ_ONCE() and __WRITE_ONCE(). This is because the pointer
to the atomic_t's counter member is always a basic (int *) so we don't
need this, and dropping it means we can avoid having to port Linux's
.

If people are against this approach, please bear in mind that the
current version of __unqual_scalar_typeof() in 
was actually the reason for Linux upgrading the minimum GCC version
required to 4.9 earlier this year so that they can guarantee C11
_Generic support [1].

So if we do want to take Linux's  we'll either
need to:

   A) bump up the minimum required version of GCC to 4.9 to match
  that required by Linux; in the Xen README we currently state
  GCC 4.8 for Arm and GCC 4.1.2_20070115 for x86.

or:

   B) mix-and-match an older version of Linux's 
  with the other headers taken from a newer version of Linux.

Thoughts?

[1] https://lkml.org/lkml/2020/7/8/1308

Signed-off-by: Ash Wilding 
---
 xen/include/asm-arm/arm32/atomic.h   | 507 +++
 xen/include/asm-arm/arm32/cmpxchg.h  | 279 +
 xen/include/asm-arm/arm64/atomic.h   | 228 ++
 xen/include/asm-arm/arm64/atomic_ll_sc.h | 353 
 xen/include/asm-arm/arm64/atomic_lse.h   | 419 +++
 xen/include/asm-arm/arm64/cmpxchg.h  | 285 +
 xen/include/asm-arm/arm64/lse.h  |  48 +++
 xen/include/xen/rwonce.h |  90 
 8 files changed, 2209 insertions(+)
 create mode 100644 xen/include/asm-arm/arm32/atomic.h
 create mode 100644 xen/include/asm-arm/arm32/cmpxchg.h
 create mode 100644 xen/include/asm-arm/arm64/atomic.h
 create mode 100644 xen/include/asm-arm/arm64/atomic_ll_sc.h
 create mode 100644 xen/include/asm-arm/arm64/atomic_lse.h
 create mode 100644 xen/include/asm-arm/arm64/cmpxchg.h
 create mode 100644 xen/include/asm-arm/arm64/lse.h
 create mode 100644 xen/include/xen/rwonce.h

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
new file mode 100644
index 00..ac6338dd9b
--- /dev/null
+++ b/xen/include/asm-arm/arm32/atomic.h
@@ -0,0 +1,507 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *  arch/arm/include/asm/atomic.h
+ *
+ *  Copyright (C) 1996 Russell King.
+ *  Copyright (C) 2002 Deep Blue Solutions Ltd.
+ */
+#ifndef __ASM_ARM_ATOMIC_H
+#define __ASM_ARM_ATOMIC_H
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#ifdef __KERNEL__
+
+/*
+ * On ARM, ordinary assignment (str instruction) doesn't clear the local
+ * strex/ldrex monitor on some implementations. The reason we can use it for
+ * atomic_set() is the clrex or dummy strex done on every exception return.
+ */
+#define atomic_read(v) READ_ONCE((v)->counter)
+#define atomic_set(v,i)        WRITE_ONCE(((v)->counter), (i))
+
+#if __LINUX_ARM_ARCH__ >= 6
+
+/*
+ * ARMv6 UP and SMP safe atomic ops.  We use load exclusive and
+ * store exclusive to ensure that these are atomic.  We may loop
+ * to ensure that the update happens.
+ */
+
+#define ATOMIC_OP(op, c_op, asm_op)\
+static inline void atomic_##op(int i, atomic_t *v) \
+{  \
+   unsigned long tmp;  \
+   int result; \
+   \
+   prefetchw(&v->counter); \
+   __asm__ __volatile__("@ atomic_" #op "\n"   \
+"1:ldrex   %0, [%3]\n" \
+"  " #asm_op " %0, %0, %4\n"   \
+"  strex   %1, %0, [%3]\n" \
+"  teq %1, #0\n"   \
+"  bne 1b" \
   : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)   \
+ 

[RFC PATCH v2 06/15] xen: port Linux to Xen

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

 - Drop kasan related helpers.

 - Drop READ_ONCE() and WRITE_ONCE(); the __* versions are fine for now
   as the only callers in Xen are the arm32 atomics helpers which are
   always accessing an atomic_t's counter member, which is an int. This
   means we can swap the arm32 atomics helpers over to using the __*
   versions like the arm64 code does, removing a dependency on
for __native_word().

 - Relax __unqual_scalar_typeof() in __READ_ONCE() to just typeof().
   Similarly to above, the only callers in Xen are the arm32/arm64
   atomics helpers, which are always accessing an atomic_t's counter
   member as a regular (int *) which doesn't need unqual'ing. This means
   we can remove the other dependency on .

Please see previous patch in the series for expanded rationale on why
not having to port  to Xen makes life easier.

Signed-off-by: Ash Wilding 
---
 xen/include/xen/rwonce.h | 79 +++-
 1 file changed, 5 insertions(+), 74 deletions(-)

diff --git a/xen/include/xen/rwonce.h b/xen/include/xen/rwonce.h
index 6b47392d1c..d001e7e41e 100644
--- a/xen/include/xen/rwonce.h
+++ b/xen/include/xen/rwonce.h
@@ -1,90 +1,21 @@
-/* SPDX-License-Identifier: GPL-2.0 */
 /*
- * Prevent the compiler from merging or refetching reads or writes. The
- * compiler is also forbidden from reordering successive instances of
- * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
- * particular ordering. One way to make the compiler aware of ordering is to
- * put the two invocations of READ_ONCE or WRITE_ONCE in different C
- * statements.
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
- * These two macros will also work on aggregate data types like structs or
- * unions.
- *
- * Their two major use cases are: (1) Mediating communication between
- * process-level code and irq/NMI handlers, all running on the same CPU,
- * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
- * mutilate accesses that either do not require ordering or that interact
- * with an explicit memory barrier or atomic instruction that provides the
- * required ordering.
+ * SPDX-License-Identifier: GPL-2.0
  */
+
 #ifndef __ASM_GENERIC_RWONCE_H
 #define __ASM_GENERIC_RWONCE_H
 
 #ifndef __ASSEMBLY__
 
-#include 
-#include 
-#include 
-
-/*
- * Yes, this permits 64-bit accesses on 32-bit architectures. These will
- * actually be atomic in some cases (namely Armv7 + LPAE), but for others we
- * rely on the access being split into 2x32-bit accesses for a 32-bit quantity
- * (e.g. a virtual address) and a strong prevailing wind.
- */
-#define compiletime_assert_rwonce_type(t)  \
-   compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long),  \
-   "Unsupported access size for {READ,WRITE}_ONCE().")
-
-/*
- * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
- * atomicity. Note that this may result in tears!
- */
-#ifndef __READ_ONCE
-#define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x))
-#endif
-
-#define READ_ONCE(x)   \
-({ \
-   compiletime_assert_rwonce_type(x);  \
-   __READ_ONCE(x); \
-})
+#define __READ_ONCE(x) (*(const volatile typeof(x) *)&(x))
 
 #define __WRITE_ONCE(x, val)   \
 do {   \
*(volatile typeof(x) *)&(x) = (val);\
 } while (0)
 
-#define WRITE_ONCE(x, val) \
-do {   \
-   compiletime_assert_rwonce_type(x);  \
-   __WRITE_ONCE(x, val);   \
-} while (0)
-
-static __no_sanitize_or_inline
-unsigned long __read_once_word_nocheck(const void *addr)
-{
-   return __READ_ONCE(*(unsigned long *)addr);
-}
-
-/*
- * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
- * word from memory atomically but without telling KASAN/KCSAN. This is
- * usually used by unwinding code when walking the stack of a running process.
- */
-#define READ_ONCE_NOCHECK(x)   \
-({ \
-   compiletime_assert(sizeof(x) == sizeof(unsigned long),  \
-   "Unsupported access size for READ_ONCE_NOCHECK().");\
-   (typeof(x))__read_once_word_nocheck(&(x));  \
-})
-
-static __no_kasan_or_inline
-unsigned long read_word_at_a_time(const void *addr)
-{
-   kasan_check_read(addr, 1);
-   return *(unsigned long *)addr;
-}
 
 #endif /* __ASSEMBLY__ */
-#endif /* 

[RFC PATCH v2 04/15] xen/arm: Delete Xen atomics helpers

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

To maintain clean diffs and dissectability, let's delete the existing
Xen atomics helpers before pulling in the Linux versions.

Signed-off-by: Ash Wilding 
---
 xen/include/asm-arm/arm32/atomic.h  | 175 -
 xen/include/asm-arm/arm32/cmpxchg.h | 229 
 xen/include/asm-arm/arm64/atomic.h  | 148 --
 xen/include/asm-arm/arm64/cmpxchg.h | 183 --
 4 files changed, 735 deletions(-)
 delete mode 100644 xen/include/asm-arm/arm32/atomic.h
 delete mode 100644 xen/include/asm-arm/arm32/cmpxchg.h
 delete mode 100644 xen/include/asm-arm/arm64/atomic.h
 delete mode 100644 xen/include/asm-arm/arm64/cmpxchg.h

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
deleted file mode 100644
index 2832a72792..00
--- a/xen/include/asm-arm/arm32/atomic.h
+++ /dev/null
@@ -1,175 +0,0 @@
-/*
- *  arch/arm/include/asm/atomic.h
- *
- *  Copyright (C) 1996 Russell King.
- *  Copyright (C) 2002 Deep Blue Solutions Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-#ifndef __ARCH_ARM_ARM32_ATOMIC__
-#define __ARCH_ARM_ARM32_ATOMIC__
-
-/*
- * ARMv6 UP and SMP safe atomic ops.  We use load exclusive and
- * store exclusive to ensure that these are atomic.  We may loop
- * to ensure that the update happens.
- */
-static inline void atomic_add(int i, atomic_t *v)
-{
-   unsigned long tmp;
-   int result;
-
-   prefetchw(&v->counter);
-   __asm__ __volatile__("@ atomic_add\n"
-"1:ldrex   %0, [%3]\n"
-"  add %0, %0, %4\n"
-"  strex   %1, %0, [%3]\n"
-"  teq %1, #0\n"
-"  bne 1b"
-   : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-   : "r" (&v->counter), "Ir" (i)
-   : "cc");
-}
-
-static inline int atomic_add_return(int i, atomic_t *v)
-{
-   unsigned long tmp;
-   int result;
-
-   smp_mb();
-   prefetchw(&v->counter);
-
-   __asm__ __volatile__("@ atomic_add_return\n"
-"1:ldrex   %0, [%3]\n"
-"  add %0, %0, %4\n"
-"  strex   %1, %0, [%3]\n"
-"  teq %1, #0\n"
-"  bne 1b"
-   : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-   : "r" (&v->counter), "Ir" (i)
-   : "cc");
-
-   smp_mb();
-
-   return result;
-}
-
-static inline void atomic_sub(int i, atomic_t *v)
-{
-   unsigned long tmp;
-   int result;
-
-   prefetchw(&v->counter);
-   __asm__ __volatile__("@ atomic_sub\n"
-"1:ldrex   %0, [%3]\n"
-"  sub %0, %0, %4\n"
-"  strex   %1, %0, [%3]\n"
-"  teq %1, #0\n"
-"  bne 1b"
-   : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-   : "r" (&v->counter), "Ir" (i)
-   : "cc");
-}
-
-static inline int atomic_sub_return(int i, atomic_t *v)
-{
-   unsigned long tmp;
-   int result;
-
-   smp_mb();
-   prefetchw(&v->counter);
-
-   __asm__ __volatile__("@ atomic_sub_return\n"
-"1:ldrex   %0, [%3]\n"
-"  sub %0, %0, %4\n"
-"  strex   %1, %0, [%3]\n"
-"  teq %1, #0\n"
-"  bne 1b"
-   : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-   : "r" (&v->counter), "Ir" (i)
-   : "cc");
-
-   smp_mb();
-
-   return result;
-}
-
-static inline void atomic_and(int m, atomic_t *v)
-{
-   unsigned long tmp;
-   int result;
-
-   prefetchw(&v->counter);
-   __asm__ __volatile__("@ atomic_and\n"
-"1:ldrex   %0, [%3]\n"
-"  and %0, %0, %4\n"
-"  strex   %1, %0, [%3]\n"
-"  teq %1, #0\n"
-"  bne 1b"
-   : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-   : "r" (&v->counter), "Ir" (m)
-   : "cc");
-}
-
-static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
-{
-   int oldval;
-   unsigned long res;
-
-   smp_mb();
-   prefetchw(&ptr->counter);
-
-   do {
-   __asm__ __volatile__("@ atomic_cmpxchg\n"
-   "ldrex  %1, [%3]\n"
-   "mov%0, #0\n"
-   "teq%1, %4\n"
-   "strexeq %0, %5, [%3]\n"
-   : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
-   : "r" (&ptr->counter), "Ir" (old), "r" (new)
-   : "cc");
-   } while (res);
-
-   smp_mb();
-
-   return oldval;
-}
-
-static inline int __atomic_add_unless(atomic_t *v, int a, int u)
-{
-   int oldval, newval;
-   unsigned long tmp;
-
-   smp_mb();
-   prefetchw(&v->counter);
-
-   __asm__ __volatile__ ("@ atomic_add_unless\n"
-"1:ldrex   %0, [%4]\n"
-"  teq %0, %5\n"
-"  beq 2f\n"
-"  add %1, %0, %6\n"
-"  strex   %2, %1, [%4]\n"
-"  teq %2, #0\n"
-"  bne 1b\n"
-"2:"
-   : "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
-   : "r" (&v->counter), "r" (u), "r" (a)
-   : "cc");
-
-   if (oldval != u)
-   smp_mb();
-
-  

[RFC PATCH v2 09/15] xen/arm64: port Linux's arm64 atomic_lse.h to Xen

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

As with the LL/SC atomics helpers, most of the "work" here is simply
deleting the atomic64_t helper definitions as we don't have an
atomic64_t type in Xen.

We do also need to s/__always_inline/always_inline/ to match the
qualifier name used by Xen.

Signed-off-by: Ash Wilding 
---
 xen/include/asm-arm/arm64/atomic_lse.h | 189 ++---
 1 file changed, 8 insertions(+), 181 deletions(-)

diff --git a/xen/include/asm-arm/arm64/atomic_lse.h b/xen/include/asm-arm/arm64/atomic_lse.h
index b3b0d43a7d..81613f7250 100644
--- a/xen/include/asm-arm/arm64/atomic_lse.h
+++ b/xen/include/asm-arm/arm64/atomic_lse.h
@@ -1,14 +1,15 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
+
 /*
- * Based on arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 1996 Russell King.
  * Copyright (C) 2002 Deep Blue Solutions Ltd.
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
 
-#ifndef __ASM_ATOMIC_LSE_H
-#define __ASM_ATOMIC_LSE_H
+#ifndef __ASM_ARM_ARM64_ATOMIC_LSE_H
+#define __ASM_ARM_ARM64_ATOMIC_LSE_H
 
 #define ATOMIC_OP(op, asm_op)  \
 static inline void __lse_atomic_##op(int i, atomic_t *v)   \
@@ -163,182 +164,8 @@ ATOMIC_FETCH_OP_SUB(, al, "memory")
 
 #undef ATOMIC_FETCH_OP_SUB
 
-#define ATOMIC64_OP(op, asm_op)\
-static inline void __lse_atomic64_##op(s64 i, atomic64_t *v)   \
-{  \
-   asm volatile(   \
-   __LSE_PREAMBLE  \
-"  " #asm_op " %[i], %[v]\n"   \
-   : [i] "+r" (i), [v] "+Q" (v->counter)   \
-   : "r" (v)); \
-}
-
-ATOMIC64_OP(andnot, stclr)
-ATOMIC64_OP(or, stset)
-ATOMIC64_OP(xor, steor)
-ATOMIC64_OP(add, stadd)
-
-#undef ATOMIC64_OP
-
-#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \
-static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\
-{  \
-   asm volatile(   \
-   __LSE_PREAMBLE  \
-"  " #asm_op #mb " %[i], %[i], %[v]"   \
-   : [i] "+r" (i), [v] "+Q" (v->counter)   \
-   : "r" (v)   \
-   : cl);  \
-   \
-   return i;   \
-}
-
-#define ATOMIC64_FETCH_OPS(op, asm_op) \
-   ATOMIC64_FETCH_OP(_relaxed,   , op, asm_op) \
-   ATOMIC64_FETCH_OP(_acquire,  a, op, asm_op, "memory")   \
-   ATOMIC64_FETCH_OP(_release,  l, op, asm_op, "memory")   \
-   ATOMIC64_FETCH_OP(, al, op, asm_op, "memory")
-
-ATOMIC64_FETCH_OPS(andnot, ldclr)
-ATOMIC64_FETCH_OPS(or, ldset)
-ATOMIC64_FETCH_OPS(xor, ldeor)
-ATOMIC64_FETCH_OPS(add, ldadd)
-
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_FETCH_OPS
-
-#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...)\
-static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\
-{  \
-   unsigned long tmp;  \
-   \
-   asm volatile(   \
-   __LSE_PREAMBLE  \
-   "   ldadd" #mb "%[i], %x[tmp], %[v]\n"  \
-   "   add %[i], %[i], %x[tmp]"\
-   : [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
-   : "r" (v)   \
-   : cl);  \
-   \
-   return i;   \
-}
-
-ATOMIC64_OP_ADD_RETURN(_relaxed,   )
-ATOMIC64_OP_ADD_RETURN(_acquire,  a, "memory")
-ATOMIC64_OP_ADD_RETURN(_release,  l, "memory")
-ATOMIC64_OP_ADD_RETURN(, al, "memory")
-
-#undef ATOMIC64_OP_ADD_RETURN
-
-static inline void __lse_atomic64_and(s64 i, atomic64_t *v)
-{
-   asm volatile(
-   __LSE_PREAMBLE
-   "   mvn %[i], %[i]\n"
-   "   stclr   %[i], %[v]"
-   : [i] "+&r" (i), [v] "+Q" (v->counter)
-   : "r" (v));
-}
-
-#define 

[RFC PATCH v2 08/15] xen/arm64: port Linux's arm64 atomic_ll_sc.h to Xen

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

Most of the "work" here is simply deleting the atomic64_t helper
definitions as we don't have an atomic64_t type in Xen.

Signed-off-by: Ash Wilding 
---
 xen/include/asm-arm/arm64/atomic_ll_sc.h | 134 +--
 1 file changed, 6 insertions(+), 128 deletions(-)

diff --git a/xen/include/asm-arm/arm64/atomic_ll_sc.h b/xen/include/asm-arm/arm64/atomic_ll_sc.h
index e1009c0f94..20b0cb174e 100644
--- a/xen/include/asm-arm/arm64/atomic_ll_sc.h
+++ b/xen/include/asm-arm/arm64/atomic_ll_sc.h
@@ -1,16 +1,16 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Based on arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 1996 Russell King.
  * Copyright (C) 2002 Deep Blue Solutions Ltd.
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
 
-#ifndef __ASM_ATOMIC_LL_SC_H
-#define __ASM_ATOMIC_LL_SC_H
+#ifndef __ASM_ARM_ARM64_ATOMIC_LL_SC_H
+#define __ASM_ARM_ARM64_ATOMIC_LL_SC_H
 
-#include 
+#include 
 
 #ifdef CONFIG_ARM64_LSE_ATOMICS
 #define __LL_SC_FALLBACK(asm_ops)  \
@@ -134,128 +134,6 @@ ATOMIC_OPS(andnot, bic, )
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define ATOMIC64_OP(op, asm_op, constraint)\
-static inline void \
-__ll_sc_atomic64_##op(s64 i, atomic64_t *v)\
-{  \
-   s64 result; \
-   unsigned long tmp;  \
-   \
-   asm volatile("// atomic64_" #op "\n"\
-   __LL_SC_FALLBACK(   \
-"  prfmpstl1strm, %2\n"\
-"1:ldxr%0, %2\n"   \
-"  " #asm_op " %0, %0, %3\n"   \
-"  stxr%w1, %0, %2\n"  \
-"  cbnz%w1, 1b")   \
-   : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)\
-   : __stringify(constraint) "r" (i)); \
-}
-
-#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
-static inline long \
-__ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \
-{  \
-   s64 result; \
-   unsigned long tmp;  \
-   \
-   asm volatile("// atomic64_" #op "_return" #name "\n"\
-   __LL_SC_FALLBACK(   \
-"  prfmpstl1strm, %2\n"\
-"1:ld" #acq "xr%0, %2\n"   \
-"  " #asm_op " %0, %0, %3\n"   \
-"  st" #rel "xr%w1, %0, %2\n"  \
-"  cbnz%w1, 1b\n"  \
-"  " #mb ) \
-   : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)\
-   : __stringify(constraint) "r" (i)   \
-   : cl);  \
-   \
-   return result;  \
-}
-
-#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\
-static inline long \
-__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\
-{  \
-   s64 result, val;\
-   unsigned long tmp;  \
-   \
-   asm volatile("// atomic64_fetch_" #op #name "\n"\
-   __LL_SC_FALLBACK(   \
-"  prfmpstl1strm, %3\n"\
-"1:ld" #acq "xr%0, %3\n"   \
-"  " #asm_op " %1, %0, %4\n"   \
-"  st" #rel "xr%w2, %1, %3\n"  \
-"  cbnz%w2, 1b\n"  \
-"  " #mb )   

[RFC PATCH v2 07/15] xen/arm: prepare existing Xen headers for Linux atomics

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

This small patch helps prepare  and the arm32/arm64
specific system.h headers to play nicely with the Linux atomics helpers:

 - We don't need the indirection around atomic_add_unless() anymore so
   let's just pull up the old Xen arm64 definition into here and use it
   for both arm32 and arm64.

 - We don't need an atomic_xchg() in  as the arm32/arm64
   specific cmpxchg.h from Linux defines it for us.

 - We drop the includes of  and  from
as they're not needed.

 - We swap out the include of the arm32/arm64 specific cmpxchg.h in the
   arm32/arm64 specific system.h and instead make them include atomic.h;
   this works around build issues from cmpxchg.h trying to use the
   __ll_sc_lse_body() macro before it's ready.

Signed-off-by: Ash Wilding 
---
 xen/include/asm-arm/arm32/system.h |  2 +-
 xen/include/asm-arm/arm64/system.h |  2 +-
 xen/include/asm-arm/atomic.h   | 15 +++
 3 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/xen/include/asm-arm/arm32/system.h 
b/xen/include/asm-arm/arm32/system.h
index ab57abfbc5..88798d11db 100644
--- a/xen/include/asm-arm/arm32/system.h
+++ b/xen/include/asm-arm/arm32/system.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_ARM32_SYSTEM_H
 #define __ASM_ARM32_SYSTEM_H
 
-#include 
+#include 
 
 #define local_irq_disable() asm volatile ( "cpsid i @ local_irq_disable\n" : : : "cc" )
 #define local_irq_enable()  asm volatile ( "cpsie i @ local_irq_enable\n" : : : "cc" )
diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
index 2e36573ac6..dfbbe4b87d 100644
--- a/xen/include/asm-arm/arm64/system.h
+++ b/xen/include/asm-arm/arm64/system.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_ARM64_SYSTEM_H
 #define __ASM_ARM64_SYSTEM_H
 
-#include 
+#include 
 
 /* Uses uimm4 as a bitmask to select the clearing of one or more of
  * the DAIF exception mask bits:
diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h
index ac2798d095..866f54d03c 100644
--- a/xen/include/asm-arm/atomic.h
+++ b/xen/include/asm-arm/atomic.h
@@ -2,8 +2,6 @@
 #define __ARCH_ARM_ATOMIC__
 
 #include 
-#include 
-#include 
 
 #define build_atomic_read(name, size, width, type) \
 static inline type name(const volatile type *addr) \
@@ -220,10 +218,19 @@ static inline int atomic_add_negative(int i, atomic_t *v)
 
 static inline int atomic_add_unless(atomic_t *v, int a, int u)
 {
-return __atomic_add_unless(v, a, u);
+   int c, old;
+
+   c = atomic_read(v);
+   while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c)
+   c = old;
+
+   return c;
 }
 
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+   return cmpxchg(&((v)->counter), (old), (new));
+}
 
 #endif /* __ARCH_ARM_ATOMIC__ */
 /*
-- 
2.24.3 (Apple Git-128)




[RFC PATCH v2 00/15] xen/arm: port Linux LL/SC and LSE atomics helpers to Xen.

2020-11-11 Thread Ash Wilding
From: Ash Wilding 


Hey,

I've found some time to improve this series a bit:


Changes since RFC v1


 - Broken up patches into smaller chunks to aid in readability.

 - As per Julien's feedback I've also introduced intermediary patches
   that first remove Xen's existing headers, then pull in the current
   Linux versions as-is. This means we only need to review the changes
   made while porting to Xen, rather than reviewing the existing Linux
   code.

 - Pull in Linux's  as  for
   __READ_ONCE()/__WRITE_ONCE() instead of putting these in Xen's
   . While doing this, provide justification for
   dropping Linux's  and relaxing the
   __READ_ONCE() usage of __unqual_scalar_typeof() to just typeof()
   (see patches #5 and #6).



Keeping the rest of the cover-letter unchanged as it would still be
good to discuss these things:


Arguments in favour of doing this
=================================

- Lets SMMUv3 driver switch to using  rather than
  maintaining its own implementation of the helpers.

- Provides mitigation against XSA-295 [2], which affects both arm32
  and arm64, across all versions of Xen, and may allow a domU to
  maliciously or erroneously DoS the hypervisor.

- All Armv8-A core implementations since ~2017 implement LSE so
  there is an argument to be made we are long overdue support in
  Xen. This is compounded by LSE atomics being more performant than
  LL/SC atomics in most real-world applications, especially at high
  core counts.

- We may be able to get improved performance when using LL/SC too
  as Linux provides helpers with relaxed ordering requirements which
  are currently not available in Xen, though for this we would need
  to go back through existing code to see where the more relaxed
  versions can be safely used.

- Anything else?
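The performance point is easiest to see side by side. A hand-written sketch
(not taken from this series) of the same atomic add to v->counter in both
styles on AArch64:

```asm
ll_sc_atomic_add:                 // Armv8.0 LL/SC: may loop under contention
1:      ldxr    w0, [x2]          // load-exclusive v->counter
        add     w0, w0, w3        // w3 = i
        stxr    w1, w0, [x2]      // store-exclusive; w1 = failure flag
        cbnz    w1, 1b            // retry if the exclusive monitor was lost

lse_atomic_add:                   // Armv8.1 LSE: single far atomic, no loop
        stadd   w3, [x2]          // add w3 to [x2] atomically
```

Under heavy contention the LL/SC loop can keep failing and retrying -- the
behaviour at the root of the XSA-295 DoS concern for guest atomics -- whereas
the LSE form completes in one instruction.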


Arguments against doing this


- Limited testing infrastructure in place to ensure use of new
  atomics helpers does not introduce new bugs and regressions. This
  is a particularly strong argument given how difficult it can be to
  identify and debug malfunctioning atomics. The old adage applies,
  "If it ain't broke, don't fix it."

- Anything else?


Disclaimers
===

 - This is a very rough first-pass effort intended to spur the
   discussions along.

 - Only build-tested on arm64 and arm32, *not* run-tested. I did also
   build for x86_64 just to make sure I didn't inadvertently break that.

 - This version only tackles atomics and cmpxchg; I've not yet had a
   chance to look at locks so those are still using LL/SC.

 - The timeout variants of cmpxchg() (used by guest atomics) are still
   using LL/SC regardless of LSE being available, as these helpers are
   not provided by Linux so I just copied over the existing Xen ones.


Any further comments, guidance, and discussion on how to improve this 
approach and get LSE atomics support merged into Xen would be greatly
appreciated.

Thanks!
Ash.

[1] 
https://lore.kernel.org/xen-devel/13baac40-8b10-0def-4e44-0d8f655fc...@xen.org/
[2] https://xenbits.xen.org/xsa/advisory-295.txt

Ash Wilding (15):
  xen/arm: Support detection of CPU features in other ID registers
  xen/arm: Add detection of Armv8.1-LSE atomic instructions
  xen/arm: Add ARM64_HAS_LSE_ATOMICS hwcap
  xen/arm: Delete Xen atomics helpers
  xen/arm: pull in Linux atomics helpers and dependencies
  xen: port Linux  to Xen
  xen/arm: prepare existing Xen headers for Linux atomics
  xen/arm64: port Linux LL/SC atomics helpers to Xen
  xen/arm64: port Linux LSE atomics helpers to Xen
  xen/arm64: port Linux's arm64 cmpxchg.h to Xen
  xen/arm64: port Linux's arm64 atomic.h to Xen
  xen/arm64: port Linux's arm64 lse.h to Xen
  xen/arm32: port Linux's arm32 atomic.h to Xen
  xen/arm32: port Linux's arm32 cmpxchg.h to Xen
  xen/arm: remove dependency on gcc built-in __sync_fetch_and_add()

 xen/arch/arm/Kconfig |  11 +
 xen/arch/arm/Makefile|   1 +
 xen/arch/arm/lse.c   |  13 +
 xen/arch/arm/setup.c |  13 +-
 xen/include/asm-arm/arm32/atomic.h   | 253 +++-
 xen/include/asm-arm/arm32/cmpxchg.h  | 388 +++---
 xen/include/asm-arm/arm32/system.h   |   2 +-
 xen/include/asm-arm/arm64/atomic.h   | 236 +--
 xen/include/asm-arm/arm64/atomic_ll_sc.h | 231 +++
 xen/include/asm-arm/arm64/atomic_lse.h   | 246 +++
 xen/include/asm-arm/arm64/cmpxchg.h  | 497 ---
 xen/include/asm-arm/arm64/lse.h  |  48 +++
 xen/include/asm-arm/arm64/system.h   |   2 +-
 xen/include/asm-arm/atomic.h |  15 +-
 xen/include/asm-arm/cpufeature.h |  57 +--
 xen/include/asm-arm/system.h |   9 +-
 xen/include/xen/rwonce.h |  21 +
 17 files changed, 1469 insertions(+), 574 deletions(-)
 create mode 100644 xen/arch/arm/lse.c

[RFC PATCH v2 03/15] xen/arm: Add ARM64_HAS_LSE_ATOMICS hwcap

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

This patch introduces the ARM64_HAS_LSE_ATOMICS hwcap.

While doing this, CONFIG_ARM64_LSE_ATOMICS is added to control whether
the hwcap is actually detected and set at runtime. Without this Kconfig
option being set we will always fall back on LL/SC atomics using
Armv8.0 exclusive accesses.

Note this patch does not actually add the ALTERNATIVE() switching based
on the hwcap being detected and set; that comes later in the series.

Signed-off-by: Ash Wilding 
---
 xen/arch/arm/Kconfig | 11 +++
 xen/arch/arm/Makefile|  1 +
 xen/arch/arm/lse.c   | 13 +
 xen/include/asm-arm/cpufeature.h |  3 ++-
 4 files changed, 27 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/lse.c

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 2777388265..febc41e492 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -78,6 +78,17 @@ config SBSA_VUART_CONSOLE
  Allows a guest to use SBSA Generic UART as a console. The
  SBSA Generic UART implements a subset of ARM PL011 UART.
 
+config ARM64_LSE_ATOMICS
+   bool "Armv8.1-LSE Atomics"
+   depends on ARM_64 && HAS_ALTERNATIVE
+   default y
+   ---help---
+   When set, dynamically patch Xen at runtime to use Armv8.1-LSE
+   atomics when supported by the system.
+
+   When unset, or when Armv8.1-LSE atomics are not supported by the
+   system, fallback on LL/SC atomics using Armv8.0 exclusive accesses.
+
 config ARM_SSBD
bool "Speculative Store Bypass Disable" if EXPERT
depends on HAS_ALTERNATIVE
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 296c5e68bb..cadd0ad253 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -63,6 +63,7 @@ obj-y += vsmc.o
 obj-y += vpsci.o
 obj-y += vuart.o
 extra-y += $(TARGET_SUBARCH)/head.o
+obj-$(CONFIG_ARM64_LSE_ATOMICS) += lse.o
 
 #obj-bin-y += o
 
diff --git a/xen/arch/arm/lse.c b/xen/arch/arm/lse.c
new file mode 100644
index 00..8274dac671
--- /dev/null
+++ b/xen/arch/arm/lse.c
@@ -0,0 +1,13 @@
+
+#include 
+#include 
+
+static int __init update_lse_caps(void)
+{
+if ( cpu_has_lse_atomics )
+cpus_set_cap(ARM64_HAS_LSE_ATOMICS);
+
+return 0;
+}
+
+__initcall(update_lse_caps);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 2366926e82..48c172ee29 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -42,8 +42,9 @@
 #define ARM_SSBD 7
 #define ARM_SMCCC_1_1 8
 #define ARM64_WORKAROUND_AT_SPECULATE 9
+#define ARM64_HAS_LSE_ATOMICS 10
 
-#define ARM_NCAPS   10
+#define ARM_NCAPS   11
 
 #ifndef __ASSEMBLY__
 
-- 
2.24.3 (Apple Git-128)
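
A minimal, platform-independent sketch of how the new hwcap gates the
choice of atomics implementation. cpu_hwcaps, cpus_set_cap() and
cpus_have_cap() here are simplified single-word stand-ins for Xen's real
cpufeature code, and the string-returning atomic_impl() merely
illustrates the runtime switch that ALTERNATIVE() will later perform by
instruction patching rather than by a branch:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Capability numbers mirror the patch; the bitmap is a toy model. */
#define ARM64_HAS_LSE_ATOMICS 10
#define ARM_NCAPS             11

static unsigned long cpu_hwcaps;

static void cpus_set_cap(unsigned int num)
{
    if (num < ARM_NCAPS)          /* out-of-range caps are ignored */
        cpu_hwcaps |= 1ul << num;
}

static bool cpus_have_cap(unsigned int num)
{
    return num < ARM_NCAPS && (cpu_hwcaps & (1ul << num));
}

/* Stand-in for the LSE vs LL/SC selection. */
static const char *atomic_impl(void)
{
    return cpus_have_cap(ARM64_HAS_LSE_ATOMICS) ? "lse" : "ll_sc";
}
```

Until the cap is set (by the __initcall in lse.c when
cpu_has_lse_atomics is true), every caller sees the LL/SC path.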




[RFC PATCH v2 01/15] xen/arm: Support detection of CPU features in other ID registers

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

The current Arm boot_cpu_feature64() and boot_cpu_feature32() macros
are hardcoded to only detect features in ID_AA64PFR{0,1}_EL1 and
ID_PFR{0,1} respectively.

This patch replaces these macros with a new macro, boot_cpu_feature(),
which takes an explicit ID register name as an argument.

While we're here, cull cpu_feature64() and cpu_feature32() as they
have no callers (we only ever use the boot CPU features), and update
the printk() messages in setup.c to use the new macro.

Signed-off-by: Ash Wilding 
---
 xen/arch/arm/setup.c |  8 +++---
 xen/include/asm-arm/cpufeature.h | 44 +++-
 2 files changed, 24 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 7fcff9af2a..5121f06fc5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -134,16 +134,16 @@ static void __init processor_id(void)
cpu_has_gicv3 ? " GICv3-SysReg" : "");
 
 /* Warn user if we find unknown floating-point features */
-if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
+if ( cpu_has_fp && (boot_cpu_feature(pfr64, fp) >= 2) )
 printk(XENLOG_WARNING "WARNING: Unknown Floating-point ID:%d, "
"this may result in corruption on the platform\n",
-   boot_cpu_feature64(fp));
+   boot_cpu_feature(pfr64, fp));
 
 /* Warn user if we find unknown AdvancedSIMD features */
-if ( cpu_has_simd && (boot_cpu_feature64(simd) >= 2) )
+if ( cpu_has_simd && (boot_cpu_feature(pfr64, simd) >= 2) )
 printk(XENLOG_WARNING "WARNING: Unknown AdvancedSIMD ID:%d, "
"this may result in corruption on the platform\n",
-   boot_cpu_feature64(simd));
+   boot_cpu_feature(pfr64, simd));
 
 printk("  Debug Features: %016"PRIx64" %016"PRIx64"\n",
boot_cpu_data.dbg64.bits[0], boot_cpu_data.dbg64.bits[1]);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 10878ead8a..f9281ea343 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -1,39 +1,35 @@
 #ifndef __ASM_ARM_CPUFEATURE_H
 #define __ASM_ARM_CPUFEATURE_H
 
+#define boot_cpu_feature(idreg, feat) (boot_cpu_data.idreg.feat)
+
 #ifdef CONFIG_ARM_64
-#define cpu_feature64(c, feat) ((c)->pfr64.feat)
-#define boot_cpu_feature64(feat)   (boot_cpu_data.pfr64.feat)
-
-#define cpu_has_el0_32(boot_cpu_feature64(el0) == 2)
-#define cpu_has_el0_64(boot_cpu_feature64(el0) >= 1)
-#define cpu_has_el1_32(boot_cpu_feature64(el1) == 2)
-#define cpu_has_el1_64(boot_cpu_feature64(el1) >= 1)
-#define cpu_has_el2_32(boot_cpu_feature64(el2) == 2)
-#define cpu_has_el2_64(boot_cpu_feature64(el2) >= 1)
-#define cpu_has_el3_32(boot_cpu_feature64(el3) == 2)
-#define cpu_has_el3_64(boot_cpu_feature64(el3) >= 1)
-#define cpu_has_fp(boot_cpu_feature64(fp) < 8)
-#define cpu_has_simd  (boot_cpu_feature64(simd) < 8)
-#define cpu_has_gicv3 (boot_cpu_feature64(gic) == 1)
+#define cpu_has_el0_32  (boot_cpu_feature(pfr64, el0) == 2)
+#define cpu_has_el0_64  (boot_cpu_feature(pfr64, el0) >= 1)
+#define cpu_has_el1_32  (boot_cpu_feature(pfr64, el1) == 2)
+#define cpu_has_el1_64  (boot_cpu_feature(pfr64, el1) >= 1)
+#define cpu_has_el2_32  (boot_cpu_feature(pfr64, el2) == 2)
+#define cpu_has_el2_64  (boot_cpu_feature(pfr64, el2) >= 1)
+#define cpu_has_el3_32  (boot_cpu_feature(pfr64, el3) == 2)
+#define cpu_has_el3_64  (boot_cpu_feature(pfr64, el3) >= 1)
+#define cpu_has_fp  (boot_cpu_feature(pfr64, fp) < 8)
+#define cpu_has_simd(boot_cpu_feature(pfr64, simd) < 8)
+#define cpu_has_gicv3   (boot_cpu_feature(pfr64, gic) == 1)
 #endif
 
-#define cpu_feature32(c, feat) ((c)->pfr32.feat)
-#define boot_cpu_feature32(feat)   (boot_cpu_data.pfr32.feat)
-
-#define cpu_has_arm   (boot_cpu_feature32(arm) == 1)
-#define cpu_has_thumb (boot_cpu_feature32(thumb) >= 1)
-#define cpu_has_thumb2(boot_cpu_feature32(thumb) >= 3)
-#define cpu_has_jazelle   (boot_cpu_feature32(jazelle) > 0)
-#define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
+#define cpu_has_arm   (boot_cpu_feature(pfr32, arm) == 1)
+#define cpu_has_thumb (boot_cpu_feature(pfr32, thumb) >= 1)
+#define cpu_has_thumb2(boot_cpu_feature(pfr32, thumb) >= 3)
+#define cpu_has_jazelle   (boot_cpu_feature(pfr32, jazelle) > 0)
+#define cpu_has_thumbee   (boot_cpu_feature(pfr32, thumbee) == 1)
 #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
 
 #ifdef CONFIG_ARM_32
-#define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
+#define cpu_has_gentimer  (boot_cpu_feature(pfr32, gentimer) == 1)
 #else
 #define cpu_has_gentimer  (1)
 #endif
-#define cpu_has_security  (boot_cpu_feature32(security) > 0)
+#define cpu_has_security  (boot_cpu_feature(pfr32, security) > 0)
 
 #define 
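
The effect of the generalised macro can be seen in a small
self-contained mock. The boot_cpu_data below is a hypothetical
two-register stand-in (not Xen's real structure); the point is that one
macro parameterised by register name replaces the hardcoded
boot_cpu_feature64()/boot_cpu_feature32() pair:

```c
#include <assert.h>

/* Hypothetical mock: two ID-register field groups, values chosen to
 * represent a CPU with standard FP and LSE atomics (Atomic == 2). */
static struct {
    struct { unsigned int fp; unsigned int simd; } pfr64;
    struct { unsigned int atomic; } isa64;
} boot_cpu_data = {
    .pfr64 = { .fp = 0, .simd = 0 },
    .isa64 = { .atomic = 2 },
};

/* The new macro: any ID register, any field. */
#define boot_cpu_feature(idreg, feat) (boot_cpu_data.idreg.feat)

/* Feature tests then all share one definition style. */
#define cpu_has_fp          (boot_cpu_feature(pfr64, fp) < 8)
#define cpu_has_lse_atomics (boot_cpu_feature(isa64, atomic) == 2)
```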

[RFC PATCH v2 02/15] xen/arm: Add detection of Armv8.1-LSE atomic instructions

2020-11-11 Thread Ash Wilding
From: Ash Wilding 

Use the new infrastructure for detecting CPU features in other ID
registers to detect the presence of Armv8.1-LSE atomic instructions,
as reported by ID_AA64ISAR0_EL1.Atomic.

While we're here, print detection of these instructions in setup.c's
processor_id().

Signed-off-by: Ash Wilding 
---
 xen/arch/arm/setup.c |  5 +++--
 xen/include/asm-arm/cpufeature.h | 10 +-
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 5121f06fc5..138e1957c5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -128,10 +128,11 @@ static void __init processor_id(void)
cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
-printk("Extensions:%s%s%s\n",
+printk("Extensions:%s%s%s%s\n",
cpu_has_fp ? " FloatingPoint" : "",
cpu_has_simd ? " AdvancedSIMD" : "",
-   cpu_has_gicv3 ? " GICv3-SysReg" : "");
+   cpu_has_gicv3 ? " GICv3-SysReg" : "",
+   cpu_has_lse_atomics ? " LSE-Atomics" : "");
 
 /* Warn user if we find unknown floating-point features */
 if ( cpu_has_fp && (boot_cpu_feature(pfr64, fp) >= 2) )
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index f9281ea343..2366926e82 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -15,6 +15,7 @@
 #define cpu_has_fp  (boot_cpu_feature(pfr64, fp) < 8)
 #define cpu_has_simd(boot_cpu_feature(pfr64, simd) < 8)
 #define cpu_has_gicv3   (boot_cpu_feature(pfr64, gic) == 1)
+#define cpu_has_lse_atomics (boot_cpu_feature(isa64, atomic) == 2)
 #endif
 
 #define cpu_has_arm   (boot_cpu_feature(pfr32, arm) == 1)
@@ -187,8 +188,15 @@ struct cpuinfo_arm {
 };
 } mm64;
 
-struct {
+union {
 uint64_t bits[2];
+struct {
+unsigned long __res0 : 20;
+unsigned long atomic : 4;
+unsigned long __res1 : 40;
+
+unsigned long __res2 : 64;
+};
 } isa64;
 
 #endif
-- 
2.24.3 (Apple Git-128)
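
The isa64 union introduced above can be exercised host-side. This
sketch assumes the GCC little-endian bitfield layout that the Xen code
itself relies on, with ID_AA64ISAR0_EL1.Atomic occupying bits [23:20]
of the first 64-bit word (value 2 meaning LSE atomics are present):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the union from the patch. */
union isa64 {
    uint64_t bits[2];
    struct {
        unsigned long __res0 : 20;
        unsigned long atomic : 4;   /* ID_AA64ISAR0_EL1.Atomic */
        unsigned long __res1 : 40;

        unsigned long __res2 : 64;
    };
};

/* Extract the Atomic field from a raw low-word register value. */
static unsigned int isa64_atomic(uint64_t lo)
{
    union isa64 r = { .bits = { lo, 0 } };

    return r.atomic;
}
```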




[PATCH 10/10] xl / libxl: add 'ex_processor_mask' into 'libxl_viridian_enlightenment'

2020-11-11 Thread Paul Durrant
From: Paul Durrant 

Adding the new value into the enumeration makes it immediately available
to xl, so this patch adjusts the xl.cfg(5) documentation accordingly.

Signed-off-by: Paul Durrant 
---
Cc: Ian Jackson 
Cc: Wei Liu 
Cc: Anthony PERARD 
---
 docs/man/xl.cfg.5.pod.in | 8 
 tools/include/libxl.h| 7 +++
 tools/libs/light/libxl_types.idl | 1 +
 tools/libs/light/libxl_x86.c | 3 +++
 4 files changed, 19 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1fff..3f0f8de1e988 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2318,6 +2318,14 @@ This set incorporates use of a hypercall for 
interprocessor interrupts.
 This enlightenment may improve performance of Windows guests with multiple
 virtual CPUs.
 
+=item B
+
+This set enables new hypercall variants taking a variably-sized sparse
+B as an argument, rather than a simple 64-bit
+mask. Hence this enlightenment must be specified for guests with more
+than 64 vCPUs if B and/or B are also
+specified.
+
 =item B
 
 This is a special value that enables the default set of groups, which
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1ea5b4f446e8..eaffccb30f37 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -444,6 +444,13 @@
  */
 #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS indicates that the
+ * 'ex_processor_masks' value is present in the viridian enlightenment
+ * enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f39978..05324736b744 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -238,6 +238,7 @@ libxl_viridian_enlightenment = 
Enumeration("viridian_enlightenment", [
 (7, "synic"),
 (8, "stimer"),
 (9, "hcall_ipi"),
+(10, "ex_processor_masks"),
 ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index e18274cc10e2..86d272999d67 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -366,6 +366,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, 
uint32_t domid,
 if (libxl_bitmap_test(, 
LIBXL_VIRIDIAN_ENLIGHTENMENT_HCALL_IPI))
 mask |= HVMPV_hcall_ipi;
 
+if (libxl_bitmap_test(, 
LIBXL_VIRIDIAN_ENLIGHTENMENT_EX_PROCESSOR_MASKS))
+mask |= HVMPV_ex_processor_masks;
+
 if (mask != 0 &&
 xc_hvm_param_set(CTX->xch,
  domid,
-- 
2.20.1




[PATCH 07/10] viridian: add ExProcessorMasks variant of the IPI hypercall

2020-11-11 Thread Paul Durrant
From: Paul Durrant 

A previous patch introduced variants of the flush hypercalls that take a
'Virtual Processor Set' as an argument rather than a simple 64-bit mask.
This patch introduces a similar variant of the HVCALL_SEND_IPI hypercall
(HVCALL_SEND_IPI_EX).

NOTE: As with HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX, a guest should
  not yet issue the HVCALL_SEND_IPI_EX hypercall as support for
  'ExProcessorMasks' is not yet advertised via CPUID.

Signed-off-by: Paul Durrant 
---
Cc: Wei Liu 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: "Roger Pau Monné" 
---
 xen/arch/x86/hvm/viridian/viridian.c | 66 
 1 file changed, 66 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c 
b/xen/arch/x86/hvm/viridian/viridian.c
index 1226e1596a1c..e899dd1e9f55 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -865,6 +865,67 @@ static int hvcall_ipi(union hypercall_input *input,
 return 0;
 }
 
+static int hvcall_ipi_ex(union hypercall_input *input,
+ union hypercall_output *output,
+ unsigned long input_params_gpa,
+ unsigned long output_params_gpa)
+{
+struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+struct {
+uint32_t vector;
+uint8_t target_vtl;
+uint8_t reserved_zero[3];
+struct hv_vpset set;
+} input_params;
+struct hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+struct hv_vpset *set = &vpset->set;
+size_t size;
+int rc;
+
+/* These hypercalls should never use the fast-call convention. */
+if ( input->fast )
+return -EINVAL;
+
+/* Get input parameters. */
+if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+  sizeof(input_params)) != HVMTRANS_okay )
+return -EINVAL;
+
+if ( input_params.target_vtl ||
+ input_params.reserved_zero[0] ||
+ input_params.reserved_zero[1] ||
+ input_params.reserved_zero[2] )
+return HV_STATUS_INVALID_PARAMETER;
+
+if ( input_params.vector < 0x10 || input_params.vector > 0xff )
+return HV_STATUS_INVALID_PARAMETER;
+
+*set = input_params.set;
+if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+{
+unsigned long offset = offsetof(typeof(input_params),
+set.bank_contents);
+
+size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+if ( hvm_copy_from_guest_phys(&set->bank_contents,
+  input_params_gpa + offset,
+  size) != HVMTRANS_okay)
+return -EINVAL;
+
+size += sizeof(*set);
+}
+else
+size = sizeof(*set);
+
+rc = hv_vpset_to_vpmask(set, size, vpmask);
+if ( rc )
+return rc;
+
+send_ipi(vpmask, input_params.vector);
+
+return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
 struct vcpu *curr = current;
@@ -919,6 +980,11 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 output_params_gpa);
 break;
 
+case HVCALL_SEND_IPI_EX:
+rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
+   output_params_gpa);
+break;
+
 default:
 gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
 input.call_code);
-- 
2.20.1




[PATCH 08/10] viridian: log initial invocation of each type of hypercall

2020-11-11 Thread Paul Durrant
From: Paul Durrant 

To make it simpler to observe which viridian hypercalls are issued by a
particular Windows guest, this patch adds a per-domain mask to track them.
Each type of hypercall causes a different bit to be set in the mask and
when the bit transitions from clear to set, a log line is emitted showing
the name of the hypercall and the domain that issued it.

Signed-off-by: Paul Durrant 
---
Cc: Wei Liu 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: "Roger Pau Monné" 
---
 xen/arch/x86/hvm/viridian/viridian.c | 21 +
 xen/include/asm-x86/hvm/viridian.h   |  8 
 2 files changed, 29 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c 
b/xen/arch/x86/hvm/viridian/viridian.c
index e899dd1e9f55..670fec3a4870 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -930,6 +930,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 {
 struct vcpu *curr = current;
 struct domain *currd = curr->domain;
+struct viridian_domain *vd = currd->arch.hvm.viridian;
 int mode = hvm_guest_x86_mode(curr);
 unsigned long input_params_gpa, output_params_gpa;
 int rc = 0;
@@ -957,6 +958,10 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 switch ( input.call_code )
 {
 case HVCALL_NOTIFY_LONG_SPIN_WAIT:
+if ( !test_and_set_bit(_HCALL_spin_wait, &vd->hypercall_flags) )
+printk(XENLOG_G_INFO "d%d: VIRIDIAN 
HVCALL_NOTIFY_LONG_SPIN_WAIT\n",
+   currd->domain_id);
+
 /*
  * See section 14.5.1 of the specification.
  */
@@ -965,22 +970,38 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 
 case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
 case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
+if ( !test_and_set_bit(_HCALL_flush, &vd->hypercall_flags) )
+printk(XENLOG_G_INFO "%pd: VIRIDIAN 
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST\n",
+   currd);
+
rc = hvcall_flush(&input, &output, input_params_gpa,
   output_params_gpa);
 break;
 
 case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
 case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+if ( !test_and_set_bit(_HCALL_flush_ex, &vd->hypercall_flags) )
+printk(XENLOG_G_INFO "%pd: VIRIDIAN 
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX\n",
+   currd);
+
rc = hvcall_flush_ex(&input, &output, input_params_gpa,
  output_params_gpa);
 break;
 
 case HVCALL_SEND_IPI:
+if ( !test_and_set_bit(_HCALL_ipi, &vd->hypercall_flags) )
+printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI\n",
+   currd);
+
rc = hvcall_ipi(&input, &output, input_params_gpa,
 output_params_gpa);
 break;
 
 case HVCALL_SEND_IPI_EX:
+if ( !test_and_set_bit(_HCALL_ipi_ex, &vd->hypercall_flags) )
+printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI_EX\n",
+   currd);
+
rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
output_params_gpa);
 break;
diff --git a/xen/include/asm-x86/hvm/viridian.h 
b/xen/include/asm-x86/hvm/viridian.h
index cbf77d9c760b..d176c4b9153b 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -59,6 +59,14 @@ struct viridian_domain
 {
 union hv_guest_os_id guest_os_id;
 union hv_vp_assist_page_msr hypercall_gpa;
+unsigned long hypercall_flags;
+
+#define _HCALL_spin_wait 0
+#define _HCALL_flush 1
+#define _HCALL_flush_ex 2
+#define _HCALL_ipi 3
+#define _HCALL_ipi_ex 4
+
 struct viridian_time_ref_count time_ref_count;
 struct viridian_page reference_tsc;
 };
-- 
2.20.1
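
The one-shot logging pattern above can be modelled in plain C.
test_and_set_bit() below is a single-threaded stand-in for Xen's atomic
bit helper, and the flag names mirror the patch; each hypercall type
owns one bit, so the log line fires only on the bit's 0 -> 1 transition:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define _HCALL_spin_wait 0
#define _HCALL_flush     1

/* Single-threaded stand-in for Xen's atomic test_and_set_bit(). */
static bool test_and_set_bit(unsigned int nr, unsigned long *flags)
{
    bool was_set = *flags & (1ul << nr);

    *flags |= 1ul << nr;
    return was_set;
}

static unsigned long d0_flags;     /* per-domain hypercall_flags */
static unsigned int log_lines;

static void note_hypercall(unsigned int nr, const char *name)
{
    if (!test_and_set_bit(nr, &d0_flags)) {
        printf("d0: VIRIDIAN %s\n", name); /* emitted once per domain */
        log_lines++;
    }
}
```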




[PATCH 02/10] viridian: move IPI hypercall implementation into separate function

2020-11-11 Thread Paul Durrant
From: Paul Durrant 

This patch moves the implementation of HVCALL_SEND_IPI that is currently
inline in viridian_hypercall() into a new hvcall_ipi() function.

The new function returns Xen errno values similarly to hvcall_flush(). Hence
the errno translation code in viridian_hypercall() is generalized, removing
the need for the local 'status' variable.

NOTE: The formatting of the 'out' label is also corrected as per CODING_STYLE
  and the code is adjusted to avoid a register copy-back if 'mode' is
  neither 8 nor 4.

Signed-off-by: Paul Durrant 
---
Cc: Wei Liu 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: "Roger Pau Monné" 
---
 xen/arch/x86/hvm/viridian/viridian.c | 170 ++-
 1 file changed, 87 insertions(+), 83 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c 
b/xen/arch/x86/hvm/viridian/viridian.c
index f1a1b6a8af82..c4f720f58d6d 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -581,13 +581,72 @@ static int hvcall_flush(union hypercall_input *input,
 return 0;
 }
 
+static int hvcall_ipi(union hypercall_input *input,
+  union hypercall_output *output,
+  unsigned long input_params_gpa,
+  unsigned long output_params_gpa)
+{
+struct domain *currd = current->domain;
+struct vcpu *v;
+uint32_t vector;
+uint64_t vcpu_mask;
+
+/* Get input parameters. */
+if ( input->fast )
+{
+if ( input_params_gpa >> 32 )
+return -EINVAL;
+
+vector = input_params_gpa;
+vcpu_mask = output_params_gpa;
+}
+else
+{
+struct {
+uint32_t vector;
+uint8_t target_vtl;
+uint8_t reserved_zero[3];
+uint64_t vcpu_mask;
+} input_params;
+
+if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+  sizeof(input_params)) != HVMTRANS_okay )
+return -EINVAL;
+
+if ( input_params.target_vtl ||
+ input_params.reserved_zero[0] ||
+ input_params.reserved_zero[1] ||
+ input_params.reserved_zero[2] )
+return -EINVAL;
+
+vector = input_params.vector;
+vcpu_mask = input_params.vcpu_mask;
+}
+
+if ( vector < 0x10 || vector > 0xff )
+return -EINVAL;
+
+for_each_vcpu ( currd, v )
+{
+if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
+return -EINVAL;
+
+if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
+continue;
+
+vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+}
+
+return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
 struct vcpu *curr = current;
 struct domain *currd = curr->domain;
 int mode = hvm_guest_x86_mode(curr);
 unsigned long input_params_gpa, output_params_gpa;
-uint16_t status = HV_STATUS_SUCCESS;
+int rc = 0;
 union hypercall_input input;
 union hypercall_output output = {};
 
@@ -616,92 +675,18 @@ int viridian_hypercall(struct cpu_user_regs *regs)
  * See section 14.5.1 of the specification.
  */
 do_sched_op(SCHEDOP_yield, guest_handle_from_ptr(NULL, void));
-status = HV_STATUS_SUCCESS;
 break;
 
 case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
 case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
-{
-int rc = hvcall_flush(&input, &output, input_params_gpa,
-  output_params_gpa);
-
-switch ( rc )
-{
-case 0:
-break;
-
-case -ERESTART:
-return HVM_HCALL_preempted;
-
-default:
-ASSERT_UNREACHABLE();
-/* Fallthrough */
-case -EINVAL:
-status = HV_STATUS_INVALID_PARAMETER;
-break;
-}
-
+rc = hvcall_flush(&input, &output, input_params_gpa,
+  output_params_gpa);
 break;
-}
 
 case HVCALL_SEND_IPI:
-{
-struct vcpu *v;
-uint32_t vector;
-uint64_t vcpu_mask;
-
-status = HV_STATUS_INVALID_PARAMETER;
-
-/* Get input parameters. */
-if ( input.fast )
-{
-if ( input_params_gpa >> 32 )
-break;
-
-vector = input_params_gpa;
-vcpu_mask = output_params_gpa;
-}
-else
-{
-struct {
-uint32_t vector;
-uint8_t target_vtl;
-uint8_t reserved_zero[3];
-uint64_t vcpu_mask;
-} input_params;
-
-if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-  sizeof(input_params)) !=
- HVMTRANS_okay )
-break;
-
-if ( input_params.target_vtl ||
- input_params.reserved_zero[0] ||
- input_params.reserved_zero[1] ||
- input_params.reserved_zero[2] )
-break;

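The mask-walking loop that hvcall_ipi() factors out (the message is
truncated above) can be sketched portably. vCPU delivery via
vlapic_set_irq() is replaced by counting, nr_vcpus stands in for the
domain's vCPU list, and the 64-vCPU ceiling visible here is precisely
what the later ExProcessorMasks variants remove:

```c
#include <assert.h>
#include <stdint.h>

/* Deliver an IPI to every vCPU id set in a simple 64-bit mask.
 * Returns the number of vCPUs hit, or -1 (standing in for -EINVAL)
 * when a vCPU id cannot be represented in the mask.  The real code
 * additionally rejects vectors outside 0x10..0xff. */
static int ipi_from_mask(uint64_t vcpu_mask, unsigned int nr_vcpus)
{
    int delivered = 0;

    for (unsigned int v = 0; v < nr_vcpus; v++) {
        if (v >= sizeof(vcpu_mask) * 8)
            return -1;               /* id not addressable by the mask */

        if (vcpu_mask & (1ull << v))
            delivered++;             /* vlapic_set_irq() in Xen */
    }

    return delivered;
}
```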
[PATCH 00/10] viridian: add support for ExProcessorMasks

2020-11-11 Thread Paul Durrant
From: Paul Durrant 

Paul Durrant (10):
  viridian: move flush hypercall implementation into separate function
  viridian: move IPI hypercall implementation into separate function
  viridian: introduce a per-cpu hypercall_vpmask and accessor
functions...
  viridian: use hypercall_vpmask in hvcall_ipi()
  viridian: use softirq batching in hvcall_ipi()
  viridian: add ExProcessorMasks variants of the flush hypercalls
  viridian: add ExProcessorMasks variant of the IPI hypercall
  viridian: log initial invocation of each type of hypercall
  viridian: add a new '_HVMPV_ex_processor_masks' bit into
HVM_PARAM_VIRIDIAN...
  xl / libxl: add 'ex_processor_mask' into
'libxl_viridian_enlightenment'

 docs/man/xl.cfg.5.pod.in |   8 +
 tools/include/libxl.h|   7 +
 tools/libs/light/libxl_types.idl |   1 +
 tools/libs/light/libxl_x86.c |   3 +
 xen/arch/x86/hvm/viridian/viridian.c | 595 +--
 xen/include/asm-x86/hvm/viridian.h   |   8 +
 xen/include/public/hvm/params.h  |   7 +-
 7 files changed, 506 insertions(+), 123 deletions(-)

-- 
2.20.1




[PATCH 09/10] viridian: add a new '_HVMPV_ex_processor_masks' bit into HVM_PARAM_VIRIDIAN...

2020-11-11 Thread Paul Durrant
From: Paul Durrant 

... and advertise ExProcessorMasks support if it is set.

Support is advertised by setting bit 11 in CPUID:4004:EAX.

Signed-off-by: Paul Durrant 
---
Cc: Wei Liu 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: "Roger Pau Monné" 
Cc: George Dunlap 
Cc: Ian Jackson 
Cc: Julien Grall 
Cc: Stefano Stabellini 
---
 xen/arch/x86/hvm/viridian/viridian.c | 3 +++
 xen/include/public/hvm/params.h  | 7 ++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c 
b/xen/arch/x86/hvm/viridian/viridian.c
index 670fec3a4870..4d570adc21c0 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -84,6 +84,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS
 #define CPUID4A_MSR_BASED_APIC (1 << 3)
 #define CPUID4A_RELAX_TIMER_INT(1 << 5)
 #define CPUID4A_SYNTHETIC_CLUSTER_IPI  (1 << 10)
+#define CPUID4A_EX_PROCESSOR_MASKS (1 << 11)
 
 /* Viridian CPUID leaf 6: Implementation HW features detected and in use */
 #define CPUID6A_APIC_OVERLAY(1 << 0)
@@ -197,6 +198,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t 
leaf,
 res->a |= CPUID4A_MSR_BASED_APIC;
 if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
 res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI;
+if ( viridian_feature_mask(d) & HVMPV_ex_processor_masks )
+res->a |= CPUID4A_EX_PROCESSOR_MASKS;
 
 /*
  * This value is the recommended number of attempts to try to
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 0e3fdca09646..3b0a0f45da53 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -164,6 +164,10 @@
 #define _HVMPV_hcall_ipi 9
 #define HVMPV_hcall_ipi (1 << _HVMPV_hcall_ipi)
 
+/* Enable ExProcessorMasks */
+#define _HVMPV_ex_processor_masks 10
+#define HVMPV_ex_processor_masks (1 << _HVMPV_ex_processor_masks)
+
 #define HVMPV_feature_mask \
 (HVMPV_base_freq | \
  HVMPV_no_freq | \
@@ -174,7 +178,8 @@
  HVMPV_crash_ctl | \
  HVMPV_synic | \
  HVMPV_stimer | \
- HVMPV_hcall_ipi)
+ HVMPV_hcall_ipi | \
+ HVMPV_ex_processor_masks)
 
 #endif
 
-- 
2.20.1




[PATCH 06/10] viridian: add ExProcessorMasks variants of the flush hypercalls

2020-11-11 Thread Paul Durrant
From: Paul Durrant 

The Microsoft Hypervisor TLFS specifies variants of the already implemented
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
Processor Set' as an argument rather than a simple 64-bit mask.

This patch adds a new hvcall_flush_ex() function to implement these
(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. This makes use of
new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(), to
determine the size of the Virtual Processor Set (so it can be copied from
guest memory) and parse it into hypercall_vpmask (respectively).

NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
  support needs to be advertised via CPUID. This will be done in a
  subsequent patch.

Signed-off-by: Paul Durrant 
---
Cc: Wei Liu 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: "Roger Pau Monné" 
---
 xen/arch/x86/hvm/viridian/viridian.c | 147 +++
 1 file changed, 147 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c 
b/xen/arch/x86/hvm/viridian/viridian.c
index 765d53016c02..1226e1596a1c 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -553,6 +553,83 @@ static unsigned int vpmask_next(struct hypercall_vpmask 
*vpmask, unsigned int vp
 (vp) < HVM_MAX_VCPUS; \
 (vp) = vpmask_next(vpmask, vp))
 
+struct hypercall_vpset {
+struct hv_vpset set;
+uint64_t __bank_contents[64];
+};
+
+static DEFINE_PER_CPU(struct hypercall_vpset, hypercall_vpset);
+
+static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
+{
+uint64_t bank_mask;
+unsigned int nr = 0;
+
+for ( bank_mask = vpset->valid_bank_mask; bank_mask; bank_mask >>= 1 )
+if ( bank_mask & 1 )
+nr++;
+
+return nr;
+}
+
+static uint16_t hv_vpset_to_vpmask(struct hv_vpset *set, size_t size,
+   struct hypercall_vpmask *vpmask)
+{
+switch ( set->format )
+{
+case HV_GENERIC_SET_ALL:
+vpmask_fill(vpmask);
+return 0;
+
+case HV_GENERIC_SET_SPARSE_4K:
+{
+uint64_t bank_mask;
+unsigned int bank = 0, vp = 0;
+
+vpmask_empty(vpmask);
+for ( bank_mask = set->valid_bank_mask; bank_mask; bank_mask >>= 1 )
+{
+/* Make sure we won't dereference past the end of the array */
+if ( (void *)(set->bank_contents + bank) >=
+ (void *)set + size )
+{
+ASSERT_UNREACHABLE();
+return -EINVAL;
+}
+
+if ( bank_mask & 1 )
+{
+uint64_t mask = set->bank_contents[bank];
+unsigned int i;
+
+for ( i = 0; i < 64; i++, vp++ )
+{
+if ( mask & 1 )
+{
+if ( vp >= HVM_MAX_VCPUS )
+return -EINVAL;
+
+vpmask_set(vpmask, vp);
+}
+
+mask >>= 1;
+}
+
+bank++;
+}
+else
+vp += 64;
+}
+return 0;
+}
+
+default:
+break;
+}
+
+return -EINVAL;
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -644,6 +721,70 @@ static int hvcall_flush(union hypercall_input *input,
 return 0;
 }
 
+static int hvcall_flush_ex(union hypercall_input *input,
+   union hypercall_output *output,
+   unsigned long input_params_gpa,
+   unsigned long output_params_gpa)
+{
+struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+struct {
+uint64_t address_space;
+uint64_t flags;
+struct hv_vpset set;
+} input_params;
+
+/* These hypercalls should never use the fast-call convention. */
+if ( input->fast )
+return -EINVAL;
+
+/* Get input parameters. */
+if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+  sizeof(input_params)) != HVMTRANS_okay )
+return -EINVAL;
+
+if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+vpmask_fill(vpmask);
+else
+{
+struct hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+struct hv_vpset *set = &vpset->set;
+size_t size;
+int rc;
+
+*set = input_params.set;
+if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+{
+unsigned long offset = offsetof(typeof(input_params),
+set.bank_contents);
+
+size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+if ( hvm_copy_from_guest_phys(&set->bank_contents,
+  input_params_gpa + offset,
+  size) != 

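The bank-counting and sparse-set parsing that patch 06 introduces can
be exercised host-side with a reduced vCPU ceiling. This is a
simplified re-expression of hv_vpset_nr_banks()/hv_vpset_to_vpmask(),
not the Xen code itself: a sparse 4K-format set carries one 64-bit bank
of vCPU bits per set bit in valid_bank_mask, so bank k covers vCPU ids
64*k .. 64*k+63:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_VPS 128   /* reduced stand-in for HVM_MAX_VCPUS */

/* Number of 64-bit banks present after the fixed vpset header. */
static unsigned int nr_banks(uint64_t valid_bank_mask)
{
    unsigned int nr = 0;

    for (; valid_bank_mask; valid_bank_mask >>= 1)
        nr += valid_bank_mask & 1;

    return nr;
}

/* Expand a sparse set into a flat two-word vCPU mask.
 * Returns -1 (standing in for -EINVAL) if a vCPU id is out of range. */
static int vpset_to_mask(uint64_t valid_bank_mask,
                         const uint64_t *bank_contents, /* nr_banks() words */
                         uint64_t out[2])
{
    unsigned int bank = 0, vp = 0;

    out[0] = out[1] = 0;
    for (; valid_bank_mask; valid_bank_mask >>= 1) {
        if (valid_bank_mask & 1) {
            uint64_t mask = bank_contents[bank++];

            for (unsigned int i = 0; i < 64; i++, vp++) {
                if (mask & 1) {
                    if (vp >= MAX_VPS)
                        return -1;
                    out[vp / 64] |= 1ull << (vp % 64);
                }
                mask >>= 1;
            }
        } else {
            vp += 64;   /* absent bank: skip its id range */
        }
    }

    return 0;
}

/* Sample data: only bank 1 present, vCPUs 64 and 66 set. */
static const uint64_t sample_banks[1] = { 0x5 };
static uint64_t sample_out[2];
```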
[PATCH 01/10] viridian: move flush hypercall implementation into separate function

2020-11-11 Thread Paul Durrant
From: Paul Durrant 

This patch moves the implementation of HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST
that is currently inline in viridian_hypercall() into a new hvcall_flush()
function.

The new function returns Xen errno values which are then dealt with
appropriately. A return value of -ERESTART translates to viridian_hypercall()
returning HVM_HCALL_preempted. Other return values translate to status codes
and viridian_hypercall() returning HVM_HCALL_completed. Currently the only
values, other than -ERESTART, returned by hvcall_flush() are 0 (indicating
success) or -EINVAL.

Signed-off-by: Paul Durrant 
---
Cc: Wei Liu 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: "Roger Pau Monné" 
---
 xen/arch/x86/hvm/viridian/viridian.c | 130 ---
 1 file changed, 78 insertions(+), 52 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c 
b/xen/arch/x86/hvm/viridian/viridian.c
index dc7183a54627..f1a1b6a8af82 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -518,6 +518,69 @@ static bool need_flush(void *ctxt, struct vcpu *v)
 return vcpu_mask & (1ul << v->vcpu_id);
 }
 
+union hypercall_input {
+uint64_t raw;
+struct {
+uint16_t call_code;
+uint16_t fast:1;
+uint16_t rsvd1:15;
+uint16_t rep_count:12;
+uint16_t rsvd2:4;
+uint16_t rep_start:12;
+uint16_t rsvd3:4;
+};
+};
+
+union hypercall_output {
+uint64_t raw;
+struct {
+uint16_t result;
+uint16_t rsvd1;
+uint32_t rep_complete:12;
+uint32_t rsvd2:20;
+};
+};
+
+static int hvcall_flush(union hypercall_input *input,
+union hypercall_output *output,
+unsigned long input_params_gpa,
+unsigned long output_params_gpa)
+{
+struct {
+uint64_t address_space;
+uint64_t flags;
+uint64_t vcpu_mask;
+} input_params;
+
+/* These hypercalls should never use the fast-call convention. */
+if ( input->fast )
+return -EINVAL;
+
+/* Get input parameters. */
+if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+  sizeof(input_params)) != HVMTRANS_okay )
+return -EINVAL;
+
+/*
+ * It is not clear from the spec. if we are supposed to
+ * include current virtual CPU in the set or not in this case,
+ * so err on the safe side.
+ */
+if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+input_params.vcpu_mask = ~0ul;
+
+/*
+ * A false return means that another vcpu is currently trying
+ * a similar operation, so back off.
+ */
+if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+return -ERESTART;
+
+output->rep_complete = input->rep_count;
+
+return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
 struct vcpu *curr = current;
@@ -525,29 +588,8 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 int mode = hvm_guest_x86_mode(curr);
 unsigned long input_params_gpa, output_params_gpa;
 uint16_t status = HV_STATUS_SUCCESS;
-
-union hypercall_input {
-uint64_t raw;
-struct {
-uint16_t call_code;
-uint16_t fast:1;
-uint16_t rsvd1:15;
-uint16_t rep_count:12;
-uint16_t rsvd2:4;
-uint16_t rep_start:12;
-uint16_t rsvd3:4;
-};
-} input;
-
-union hypercall_output {
-uint64_t raw;
-struct {
-uint16_t result;
-uint16_t rsvd1;
-uint32_t rep_complete:12;
-uint32_t rsvd2:20;
-};
-} output = { 0 };
+union hypercall_input input;
+union hypercall_output output = {};
 
 ASSERT(is_viridian_domain(currd));
 
@@ -580,41 +622,25 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
 case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
 {
-struct {
-uint64_t address_space;
-uint64_t flags;
-uint64_t vcpu_mask;
-} input_params;
+int rc = hvcall_flush(&input, &output, input_params_gpa,
+  output_params_gpa);
 
-/* These hypercalls should never use the fast-call convention. */
-status = HV_STATUS_INVALID_PARAMETER;
-if ( input.fast )
+switch ( rc )
+{
+case 0:
 break;
 
-/* Get input parameters. */
-if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-  sizeof(input_params)) !=
- HVMTRANS_okay )
-break;
-
-/*
- * It is not clear from the spec. if we are supposed to
- * include current virtual CPU in the set or not in this case,
- * so err on the safe side.
- */
-if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-input_params.vcpu_mask = ~0ul;
-

[PATCH 05/10] viridian: use softirq batching in hvcall_ipi()

2020-11-11 Thread Paul Durrant
From: Paul Durrant 

vlapic_ipi() uses a softirq batching mechanism to improve the efficiency of
sending IPIs to a large number of processors. This patch modifies send_ipi()
(the worker function called by hvcall_ipi()) to also make use of the
mechanism when there are multiple bits set in the hypercall_vpmask. Hence a `nr`
field is added to the structure to track the number of set bits.

Signed-off-by: Paul Durrant 
---
Cc: Wei Liu 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: "Roger Pau Monné" 
---
 xen/arch/x86/hvm/viridian/viridian.c | 18 --
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c 
b/xen/arch/x86/hvm/viridian/viridian.c
index 63f63093a513..765d53016c02 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -11,6 +11,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -509,6 +510,7 @@ void viridian_domain_deinit(struct domain *d)
 
 struct hypercall_vpmask {
 DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
+unsigned int nr;
 };
 
 static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
@@ -516,21 +518,24 @@ static DEFINE_PER_CPU(struct hypercall_vpmask, 
hypercall_vpmask);
 static void vpmask_empty(struct hypercall_vpmask *vpmask)
 {
 bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
+vpmask->nr = 0;
 }
 
 static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp)
 {
-__set_bit(vp, vpmask->mask);
+if ( !test_and_set_bit(vp, vpmask->mask) )
+vpmask->nr++;
 }
 
 static void vpmask_fill(struct hypercall_vpmask *vpmask)
 {
 bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
+vpmask->nr = HVM_MAX_VCPUS;
 }
 
 static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
 {
-return test_bit(vp, vpmask->mask);
+return vpmask->nr && test_bit(vp, vpmask->mask);
 }
 
 static unsigned int vpmask_first(struct hypercall_vpmask *vpmask)
@@ -644,8 +649,17 @@ static void send_ipi(struct hypercall_vpmask *vpmask, 
uint8_t vector)
 struct domain *currd = current->domain;
 unsigned int vp;
 
+if ( !vpmask->nr )
+return;
+
+if ( vpmask->nr > 1 )
+cpu_raise_softirq_batch_begin();
+
 for_each_vp ( vpmask, vp )
 vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+
+if ( vpmask->nr > 1 )
+cpu_raise_softirq_batch_finish();
 }
 
 static int hvcall_ipi(union hypercall_input *input,
-- 
2.20.1




[PATCH 04/10] viridian: use hypercall_vpmask in hvcall_ipi()

2020-11-11 Thread Paul Durrant
From: Paul Durrant 

A subsequent patch will need to IPI a mask of virtual processors potentially
wider than 64 bits. A previous patch introduced per-cpu hypercall_vpmask
to allow hvcall_flush() to deal with such wide masks. This patch modifies
the implementation of hvcall_ipi() to make use of the same mask structures,
introducing a for_each_vp() macro to facilitate traversing a mask.

Signed-off-by: Paul Durrant 
---
Cc: Wei Liu 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: "Roger Pau Monné" 
---
 xen/arch/x86/hvm/viridian/viridian.c | 43 ++--
 1 file changed, 35 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c 
b/xen/arch/x86/hvm/viridian/viridian.c
index 4ab1f14b2248..63f63093a513 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -533,6 +533,21 @@ static bool vpmask_test(struct hypercall_vpmask *vpmask, 
unsigned int vp)
 return test_bit(vp, vpmask->mask);
 }
 
+static unsigned int vpmask_first(struct hypercall_vpmask *vpmask)
+{
+return find_first_bit(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int 
vp)
+{
+return find_next_bit(vpmask->mask, HVM_MAX_VCPUS, vp + 1);
+}
+
+#define for_each_vp(vpmask, vp) \
+   for ((vp) = vpmask_first(vpmask); \
+(vp) < HVM_MAX_VCPUS; \
+(vp) = vpmask_next(vpmask, vp))
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -624,15 +639,24 @@ static int hvcall_flush(union hypercall_input *input,
 return 0;
 }
 
+static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
+{
+struct domain *currd = current->domain;
+unsigned int vp;
+
+for_each_vp ( vpmask, vp )
+vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+}
+
 static int hvcall_ipi(union hypercall_input *input,
   union hypercall_output *output,
   unsigned long input_params_gpa,
   unsigned long output_params_gpa)
 {
-struct domain *currd = current->domain;
-struct vcpu *v;
+struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
 uint32_t vector;
 uint64_t vcpu_mask;
+unsigned int vp;
 
 /* Get input parameters. */
 if ( input->fast )
@@ -669,17 +693,20 @@ static int hvcall_ipi(union hypercall_input *input,
 if ( vector < 0x10 || vector > 0xff )
 return -EINVAL;
 
-for_each_vcpu ( currd, v )
+vpmask_empty(vpmask);
+for (vp = 0; vp < 64; vp++)
 {
-if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-return -EINVAL;
+if ( !vcpu_mask )
+break;
 
-if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-continue;
+if ( vcpu_mask & 1 )
+vpmask_set(vpmask, vp);
 
-vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+vcpu_mask >>= 1;
 }
 
+send_ipi(vpmask, vector);
+
 return 0;
 }
 
-- 
2.20.1




[PATCH 03/10] viridian: introduce a per-cpu hypercall_vpmask and accessor functions...

2020-11-11 Thread Paul Durrant
From: Paul Durrant 

... and make use of them in hvcall_flush()/need_flush().

Subsequent patches will need to deal with virtual processor masks potentially
wider than 64 bits. Thus, to avoid using too much stack, this patch
introduces global per-cpu virtual processor masks and converts the
implementation of hvcall_flush() to use them.

Signed-off-by: Paul Durrant 
---
Cc: Wei Liu 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: "Roger Pau Monné" 
---
 xen/arch/x86/hvm/viridian/viridian.c | 51 +---
 1 file changed, 47 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c 
b/xen/arch/x86/hvm/viridian/viridian.c
index c4f720f58d6d..4ab1f14b2248 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -507,15 +507,41 @@ void viridian_domain_deinit(struct domain *d)
 XFREE(d->arch.hvm.viridian);
 }
 
+struct hypercall_vpmask {
+DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
+};
+
+static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
+
+static void vpmask_empty(struct hypercall_vpmask *vpmask)
+{
+bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp)
+{
+__set_bit(vp, vpmask->mask);
+}
+
+static void vpmask_fill(struct hypercall_vpmask *vpmask)
+{
+bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
+{
+return test_bit(vp, vpmask->mask);
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
  */
 static bool need_flush(void *ctxt, struct vcpu *v)
 {
-uint64_t vcpu_mask = *(uint64_t *)ctxt;
+struct hypercall_vpmask *vpmask = ctxt;
 
-return vcpu_mask & (1ul << v->vcpu_id);
+return vpmask_test(vpmask, v->vcpu_id);
 }
 
 union hypercall_input {
@@ -546,6 +572,7 @@ static int hvcall_flush(union hypercall_input *input,
 unsigned long input_params_gpa,
 unsigned long output_params_gpa)
 {
+struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
 struct {
 uint64_t address_space;
 uint64_t flags;
@@ -567,13 +594,29 @@ static int hvcall_flush(union hypercall_input *input,
  * so err on the safe side.
  */
 if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-input_params.vcpu_mask = ~0ul;
+vpmask_fill(vpmask);
+else
+{
+unsigned int vp;
+
+vpmask_empty(vpmask);
+for (vp = 0; vp < 64; vp++)
+{
+if ( !input_params.vcpu_mask )
+break;
+
+if ( input_params.vcpu_mask & 1 )
+vpmask_set(vpmask, vp);
+
+input_params.vcpu_mask >>= 1;
+}
+}
 
 /*
  * A false return means that another vcpu is currently trying
  * a similar operation, so back off.
  */
-if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+if ( !paging_flush_tlb(need_flush, vpmask) )
 return -ERESTART;
 
 output->rep_complete = input->rep_count;
-- 
2.20.1




Re: [PATCH] xen/x86: Work around Clang code generation bug with asm parameters

2020-11-11 Thread Andrew Cooper
On 11/11/2020 15:11, Jan Beulich wrote:
> On 11.11.2020 13:45, Andrew Cooper wrote:
>> Clang 9 and later don't handle the clobber of %r10 correctly in
>> _hypercall64_4().  See https://bugs.llvm.org/show_bug.cgi?id=48122
> Are you sure this is a bug?

Yes.

>  With ...
>
>>  #define _hypercall64_4(type, hcall, a1, a2, a3, a4) \
>>  ({  \
>> -long res, tmp__;\
>> -register long _a4 asm ("r10") = ((long)(a4));   \
>> +long res, _a1 = (long)(a1), _a2 = (long)(a2),   \
>> +_a3 = (long)(a3);   \
>> +register long _a4 asm ("r10") = (long)(a4); \
>>  asm volatile (  \
>>  "call hypercall_page + %c[offset]"  \
>> -: "=a" (res), "=D" (tmp__), "=S" (tmp__), "=d" (tmp__), \
>> -  "=" (tmp__) ASM_CALL_CONSTRAINT \
> ... this we've requested "any register", while with ...
>
>> -: [offset] "i" (hcall * 32),\
>> -  "1" ((long)(a1)), "2" ((long)(a2)), "3" ((long)(a3)), \
>> -  "4" (_a4) \
> ... this we've asked for that specific register to be initialized
> from r10 (and without telling the compiler that r10 is going to
> change).

Consider applying that same reasoning to "1" instead of "4".  In that
case, a1 would no longer be bound to %rdi.

The use of "4" explicitly binds the input and the output, which includes
requiring them to be the same register.

Furthermore, LLVM tends to consider "not behaving in the same way as
GCC" a bug.

> Also, by what I would have hoped is convention meanwhile, the new
> macro local variables' names shouldn't start with an underscore,
> but instead perhaps end in one. But to be honest, despite knowing
> of the latent (albeit highly hypothetical) issue, each time I
> find myself making such a comment I'm one tiny step closer to
> giving up.

Convention is not created by you getting irritated at others getting
irritated at you for requesting bizarre and unusual changes in submitted
code, or by continuing to point things out, knowing that others
specifically disagree with your reasoning in this case.

Convention is created when there is general agreement over the technical
merits of writing code in one particular way vs the alternatives, *and*
its written down somewhere, so this argument does not continue any further.

There are no restrictions or guidance in the coding style to prefer one
form over another; therefore, anything is acceptable.

~Andrew



Re: [PATCH V2 21/23] xen/arm: Add mapcache invalidation handling

2020-11-11 Thread Oleksandr



On 11.11.20 21:27, Julien Grall wrote:

Hi Oleksandr,


Hi Julien.




On 11/11/2020 00:03, Oleksandr wrote:


On 16.10.20 11:41, Julien Grall wrote:

On 16/10/2020 07:29, Jan Beulich wrote:

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
@@ -1067,7 +1068,14 @@ static int __p2m_set_entry(struct 
p2m_domain *p2m,

   */
  if ( p2m_is_valid(orig_pte) &&
   !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
+    {
+#ifdef CONFIG_IOREQ_SERVER
+    if ( domain_has_ioreq_server(p2m->domain) &&
+ (p2m->domain == current->domain) && 
p2m_is_ram(orig_pte.p2m.type) )

+    p2m->domain->qemu_mapcache_invalidate = true;
+#endif
  p2m_free_entry(p2m, orig_pte, level);
+    }


For all I have to say here, please bear in mind that I don't know
the internals of Arm memory management.

The first odd thing here is the merely MFN-based condition. It may
well be that's sufficient, if there's no way to get a "not present"
entry with an MFN matching any valid MFN. (This isn't just with
your addition, but even before.)
Invalid entries are always zeroed. So in theory the problem could 
arise if MFN 0 is used in the guest. It should not be possible on 
staging, but I agree this should be fixed.




Given how p2m_free_entry() works (or is supposed to work in the
long run), is the new code you add guaranteed to only alter leaf
entries?


This path may also be called with tables. I think we want to move 
the check into p2m_free_entry() so we can find the correct leaf type.


Well, but inside p2m_free_entry() we don't have a new entry in order 
to check whether the new MFN is actually different. An extra arg only 
comes to mind...

Aside the recursive call, there are two users for p2m_free_entry():
  - When we fail to shatter a superpage in OOM
  - When the entry is replaced by an entry with a different base

I wouldn't be too concerned to send spurious mapcache invalidation in 
an error path. So I don't think you need to know the new entry.


Thank you for the clarification, sounds reasonable.




What you need to know is the type of the leaf.


Yes, to check whether it is a RAM page.


--
Regards,

Oleksandr Tyshchenko




Re: [PATCH V2 21/23] xen/arm: Add mapcache invalidation handling

2020-11-11 Thread Julien Grall

Hi Oleksandr,

On 11/11/2020 00:03, Oleksandr wrote:


On 16.10.20 11:41, Julien Grall wrote:

On 16/10/2020 07:29, Jan Beulich wrote:

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
@@ -1067,7 +1068,14 @@ static int __p2m_set_entry(struct p2m_domain 
*p2m,

   */
  if ( p2m_is_valid(orig_pte) &&
   !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
+    {
+#ifdef CONFIG_IOREQ_SERVER
+    if ( domain_has_ioreq_server(p2m->domain) &&
+ (p2m->domain == current->domain) && 
p2m_is_ram(orig_pte.p2m.type) )

+    p2m->domain->qemu_mapcache_invalidate = true;
+#endif
  p2m_free_entry(p2m, orig_pte, level);
+    }


For all I have to say here, please bear in mind that I don't know
the internals of Arm memory management.

The first odd thing here is the merely MFN-based condition. It may
well be that's sufficient, if there's no way to get a "not present"
entry with an MFN matching any valid MFN. (This isn't just with
your addition, but even before.)
Invalid entries are always zeroed. So in theory the problem could 
arise if MFN 0 is used in the guest. It should not be possible on 
staging, but I agree this should be fixed.




Given how p2m_free_entry() works (or is supposed to work in the
long run), is the new code you add guaranteed to only alter leaf
entries?


This path may also be called with tables. I think we want to move the 
check into p2m_free_entry() so we can find the correct leaf type.


Well, but inside p2m_free_entry() we don't have a new entry in order to 
check whether the new MFN is actually different. An extra arg only comes to 
mind...

Aside the recursive call, there are two users for p2m_free_entry():
  - When we fail to shatter a superpage in OOM
  - When the entry is replaced by an entry with a different base

I wouldn't be too concerned to send spurious mapcache invalidation in an 
error path. So I don't think you need to know the new entry.


What you need to know is the type of the leaf.

Cheers,

--
Julien Grall



[xen-4.11-testing test] 156634: regressions - FAIL

2020-11-11 Thread osstest service owner
flight 156634 xen-4.11-testing real [real]
flight 156686 xen-4.11-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156634/
http://logs.test-lab.xenproject.org/osstest/logs/156686/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl  13 debian-fixup fail REGR. vs. 156397
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 20 guest-start/debianhvm.repeat 
fail REGR. vs. 156397

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install 
fail like 156397
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install 
fail like 156397
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 156397
 test-armhf-armhf-libvirt 16 saverestore-support-checkfail  like 156397
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 156397
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156397
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 156397
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 156397
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 156397
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stopfail like 156397
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 156397
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stopfail like 156397
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 156397
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-i386-xl-pvshim14 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check 
fail never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  1447d449fab7e48c85faf83951842bb60d7dabe5
baseline version:
 xen  

Re: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()

2020-11-11 Thread Oleksandr



On 11.11.20 18:28, Paul Durrant wrote:

Hi Paul.




d->ioreq_server.server[id] = s;
+
+if ( s )
+d->ioreq_server.nr_servers++;
+else
+d->ioreq_server.nr_servers--;
}

#define GET_IOREQ_SERVER(d, id) \
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 7b03ab5..0679fef 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -55,6 +55,20 @@ struct ioreq_server {
uint8_tbufioreq_handling;
};

+#ifdef CONFIG_IOREQ_SERVER
+static inline bool domain_has_ioreq_server(const struct domain *d)
+{
+ASSERT((current->domain == d) || atomic_read(&d->pause_count));
+

This seems like an odd place to put such an assertion.

I might miss something or interpreted incorrectly but these asserts are
the result of how I understood the review comment on previous version [1].

I will copy a comment here for the convenience:
"This is safe only when d == current->domain and it's not paused,
or when they're distinct and d is paused. Otherwise the result is
stale before the caller can inspect it. This wants documenting by
at least a comment, but perhaps better by suitable ASSERT()s."

The way his reply was worded, I think Paul was wondering about the
place where you put the assertion, not what you actually assert.

Shall I put the assertion at the call sites of this helper instead?

Since Paul raised the question, I expect this is a question to him
rather than me?

If it is indeed a question for me then yes, put the assertion where it is clear 
why it is needed. domain_has_ioreq_server() is essentially a trivial accessor 
function; it's not the appropriate place.


Got it. Thank you for the clarification.

--
Regards,

Oleksandr Tyshchenko




Re: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()

2020-11-11 Thread Oleksandr



On 11.11.20 15:27, Jan Beulich wrote:

Hi Jan.




}

#define GET_IOREQ_SERVER(d, id) \
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 7b03ab5..0679fef 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -55,6 +55,20 @@ struct ioreq_server {
uint8_tbufioreq_handling;
};

+#ifdef CONFIG_IOREQ_SERVER
+static inline bool domain_has_ioreq_server(const struct domain *d)
+{
+ASSERT((current->domain == d) || atomic_read(&d->pause_count));
+

This seems like an odd place to put such an assertion.

I might miss something or interpreted incorrectly but these asserts are
the result of how I understood the review comment on previous version [1].

I will copy a comment here for the convenience:
"This is safe only when d == current->domain and it's not paused,
or when they're distinct and d is paused. Otherwise the result is
stale before the caller can inspect it. This wants documenting by
at least a comment, but perhaps better by suitable ASSERT()s."

The way his reply was worded, I think Paul was wondering about the
place where you put the assertion, not what you actually assert.

Shall I put the assertion at the call sites of this helper instead?

Since Paul raised the question, I expect this is a question to him
rather than me?
Yes, it was primarily a question to Paul, but I wanted to hear your 
opinion as well. Sorry for the confusion.


--
Regards,

Oleksandr Tyshchenko




RE: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()

2020-11-11 Thread Paul Durrant
> -Original Message-
> From: Jan Beulich 
> Sent: 11 November 2020 13:28
> To: Oleksandr 
> Cc: 'Oleksandr Tyshchenko' ; 'Stefano 
> Stabellini'
> ; 'Julien Grall' ; 'Volodymyr Babchuk'
> ; 'Andrew Cooper' ; 
> 'George Dunlap'
> ; 'Ian Jackson' ; 'Wei Liu' 
> ; 'Julien Grall'
> ; p...@xen.org; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
> 
> On 11.11.2020 09:41, Oleksandr wrote:
> >
> > On 11.11.20 10:08, Jan Beulich wrote:
> >
> > Hi Jan
> >
> >> On 10.11.2020 21:53, Oleksandr wrote:
> >>> On 20.10.20 13:51, Paul Durrant wrote:
> >>>
> >>> Hi Paul.
> >>>
> >>> Sorry for the late response.
> >>>
> > -Original Message-
> > From: Oleksandr Tyshchenko 
> > Sent: 15 October 2020 17:44
> > To: xen-devel@lists.xenproject.org
> > Cc: Oleksandr Tyshchenko ; Stefano 
> > Stabellini
> ;
> > Julien Grall ; Volodymyr Babchuk 
> > ; Andrew Cooper
> > ; George Dunlap ; 
> > Ian Jackson
> > ; Jan Beulich ; Wei Liu 
> > ; Paul Durrant
> > ; Julien Grall 
> > Subject: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
> >
> > From: Oleksandr Tyshchenko 
> >
> > This patch introduces a helper the main purpose of which is to check
> > if a domain is using IOREQ server(s).
> >
> > On Arm the current benefit is to avoid calling handle_io_completion()
> > (which implies iterating over all possible IOREQ servers anyway)
> > on every return in leave_hypervisor_to_guest() if there is no active
> > servers for the particular domain.
> > Also this helper will be used by one of the subsequent patches on Arm.
> >
> > This involves adding an extra per-domain variable to store the count
> > of servers in use.
> >
> > Signed-off-by: Oleksandr Tyshchenko 
> > CC: Julien Grall 
> >
> > ---
> > Please note, this is a split/cleanup/hardening of Julien's PoC:
> > "Add support for Guest IO forwarding to a device emulator"
> >
> > Changes RFC -> V1:
> >  - new patch
> >
> > Changes V1 -> V2:
> >  - update patch description
> >  - guard helper with CONFIG_IOREQ_SERVER
> >  - remove "hvm" prefix
> >  - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
> >  - put suitable ASSERT()s
> >  - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in 
> > set_ioreq_server()
> >  - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()
> > ---
> >xen/arch/arm/traps.c| 15 +--
> >xen/common/ioreq.c  |  7 ++-
> >xen/include/xen/ioreq.h | 14 ++
> >xen/include/xen/sched.h |  1 +
> >4 files changed, 30 insertions(+), 7 deletions(-)
> >
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index 507c095..a8f5fdf 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -2261,14 +2261,17 @@ static bool check_for_vcpu_work(void)
> >struct vcpu *v = current;
> >
> >#ifdef CONFIG_IOREQ_SERVER
> > -bool handled;
> > +if ( domain_has_ioreq_server(v->domain) )
> > +{
> > +bool handled;
> >
> > -local_irq_enable();
> > -handled = handle_io_completion(v);
> > -local_irq_disable();
> > +local_irq_enable();
> > +handled = handle_io_completion(v);
> > +local_irq_disable();
> >
> > -if ( !handled )
> > -return true;
> > +if ( !handled )
> > +return true;
> > +}
> >#endif
> >
> >if ( likely(!v->arch.need_flush_to_ram) )
> > diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
> > index bcd4961..a72bc0e 100644
> > --- a/xen/common/ioreq.c
> > +++ b/xen/common/ioreq.c
> > @@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, 
> > unsigned int id,
> > struct ioreq_server *s)
> >{
> >ASSERT(id < MAX_NR_IOREQ_SERVERS);
> > -ASSERT(!s || !d->ioreq_server.server[id]);
> > +ASSERT(d->ioreq_server.server[id] ? !s : !!s);
>  That looks odd. How about ASSERT(!s ^ !d->ioreq_server.server[id])?
> >>> ok, looks like it will work.
> >>>
> >>>
>  Paul
> 
> >d->ioreq_server.server[id] = s;
> > +
> > +if ( s )
> > +d->ioreq_server.nr_servers++;
> > +else
> > +d->ioreq_server.nr_servers--;
> >}
> >
> >#define GET_IOREQ_SERVER(d, id) \
> > diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
> > index 7b03ab5..0679fef 100644
> > --- a/xen/include/xen/ioreq.h
> > +++ b/xen/include/xen/ioreq.h
> > @@ -55,6 +55,20 @@ struct ioreq_server {
> >uint8_tbufioreq_handling;
> >};

Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending virq to guest

2020-11-11 Thread Jürgen Groß

On 11.11.20 16:45, Jan Beulich wrote:

On 09.11.2020 10:50, Juergen Gross wrote:

@@ -83,14 +85,28 @@ void passive_domain_destroy(struct vcpu *v)
model->free_msr(v);
  }
  
+bool nmi_oprofile_send_virq(void)

+{
+   struct vcpu *v = this_cpu(nmi_cont_vcpu);
+
+   if ( v )
+   send_guest_vcpu_virq(v, VIRQ_XENOPROF);
+
+   this_cpu(nmi_cont_vcpu) = NULL;


What if, by the time we make it here, a 2nd NMI has arrived? I
agree the next overflow interrupt shouldn't arrive this
quickly, but I also think you want to zap the per-CPU variable
first here, and ...


How could that happen? This function is activated only from NMI
context in case the NMI happened in guest mode. And it will be
executed with higher priority than any guest, so there is a zero
chance another NMI in guest mode can happen in between.

I can add a comment in this regard if you want.




+
+   return v;
+}
+
  static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
  {
int xen_mode, ovf;
  
	ovf = model->check_ctrs(cpu, &vcpu_msrs[cpu], regs);

xen_mode = ring_0(regs);
-   if ( ovf && is_active(current->domain) && !xen_mode )
-   send_guest_vcpu_virq(current, VIRQ_XENOPROF);
+   if ( ovf && is_active(current->domain) && !xen_mode ) {
+   this_cpu(nmi_cont_vcpu) = current;


... avoid overwriting any non-NULL value here. That's then of
course still not closing the window, but has (imo) overall
better behavior.

Also, style-wise, going through the file it looks to be mainly
Linux style, so may I suggest your additions / changes to be
done that way, rather than extending use of this funny mixed
style?


Works for me.


Juergen


OpenPGP_0xB0DE9DD628BF132F.asc
Description: application/pgp-keys


OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending virq to guest

2020-11-11 Thread Jan Beulich
On 09.11.2020 10:50, Juergen Gross wrote:
> @@ -83,14 +85,28 @@ void passive_domain_destroy(struct vcpu *v)
>   model->free_msr(v);
>  }
>  
> +bool nmi_oprofile_send_virq(void)
> +{
> + struct vcpu *v = this_cpu(nmi_cont_vcpu);
> +
> + if ( v )
> + send_guest_vcpu_virq(v, VIRQ_XENOPROF);
> +
> + this_cpu(nmi_cont_vcpu) = NULL;

What if, by the time we make it here, a 2nd NMI has arrived? I
agree the next overflow interrupt shouldn't arrive this
quickly, but I also think you want to zap the per-CPU variable
first here, and ...

> +
> + return v;
> +}
> +
>  static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
>  {
>   int xen_mode, ovf;
>  
>   ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
>   xen_mode = ring_0(regs);
> - if ( ovf && is_active(current->domain) && !xen_mode )
> - send_guest_vcpu_virq(current, VIRQ_XENOPROF);
> + if ( ovf && is_active(current->domain) && !xen_mode ) {
> + this_cpu(nmi_cont_vcpu) = current;

... avoid overwriting any non-NULL value here. That's then of
course still not closing the window, but has (imo) overall
better behavior.

Also, style-wise, going through the file it looks to be mainly
Linux style, so may I suggest your additions / changes to be
done that way, rather than extending use of this funny mixed
style?

Jan
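
Taking both points together — clear the per-CPU slot before sending the virq, and don't let the NMI side overwrite a still-pending request — the resulting shape can be sketched as a small user-space model. The struct, the per-CPU slot, and send_guest_vcpu_virq() below are stand-ins for Xen's primitives, not the actual implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct vcpu { int id; };

/* Stand-in for the per-CPU variable this_cpu(nmi_cont_vcpu). */
static struct vcpu *nmi_cont_vcpu;
static int virqs_sent;

/* Stand-in for send_guest_vcpu_virq(v, VIRQ_XENOPROF). */
static void send_guest_vcpu_virq(struct vcpu *v)
{
    (void)v;
    virqs_sent++;
}

static bool nmi_oprofile_send_virq(void)
{
    struct vcpu *v = nmi_cont_vcpu;

    /* Clear the slot first: a second NMI arriving from here on can
     * install a new vcpu without the store below wiping it out. */
    nmi_cont_vcpu = NULL;

    if ( v )
        send_guest_vcpu_virq(v);

    return v != NULL;
}

/* NMI-side setter: don't overwrite a request that is still pending. */
static void nmi_request_virq(struct vcpu *curr)
{
    if ( !nmi_cont_vcpu )
        nmi_cont_vcpu = curr;
}
```

As noted, this narrows rather than fully closes the window, but a lost second request now requires two NMIs to race the same continuation run.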



Re: [PATCH v4 1/3] xen/x86: add nmi continuation framework

2020-11-11 Thread Jürgen Groß

On 11.11.20 16:37, Jan Beulich wrote:

On 09.11.2020 10:50, Juergen Gross wrote:

Actions in NMI context are rather limited as e.g. locking is rather
fragile.

Add a framework to continue processing in normal interrupt context
after leaving NMI processing.

This is done by a high priority interrupt vector triggered via a
self IPI from NMI context, which will then call the continuation
function specified during NMI handling.

Signed-off-by: Juergen Gross 


Reviewed-by: Jan Beulich 
with one further adjustment request:


@@ -1799,6 +1800,24 @@ void unset_nmi_callback(void)
  nmi_callback = dummy_nmi_callback;
  }
  
+bool nmi_check_continuation(void)

+{
+bool ret = false;
+
+return ret;
+}
+
+void trigger_nmi_continuation(void)
+{
+/*
+ * Issue a self-IPI. Handling is done in spurious_interrupt().
+ * NMI could have happened in IPI sequence, so wait for ICR being idle
+ * again before leaving NMI handler.
+ */
+send_IPI_self(SPURIOUS_APIC_VECTOR);
+apic_wait_icr_idle();
+}


This additionally relies on send_IPI_self_legacy() calling
send_IPI_shortcut(), rather than e.g. resolving the local CPU
number to a destination ID. I think this wants saying maybe
here, but more importantly in that function.


Okay.


Juergen





Re: [PATCH v4 1/3] xen/x86: add nmi continuation framework

2020-11-11 Thread Jan Beulich
On 09.11.2020 10:50, Juergen Gross wrote:
> Actions in NMI context are rather limited as e.g. locking is rather
> fragile.
> 
> Add a framework to continue processing in normal interrupt context
> after leaving NMI processing.
> 
> This is done by a high priority interrupt vector triggered via a
> self IPI from NMI context, which will then call the continuation
> function specified during NMI handling.
> 
> Signed-off-by: Juergen Gross 

Reviewed-by: Jan Beulich 
with one further adjustment request:

> @@ -1799,6 +1800,24 @@ void unset_nmi_callback(void)
>  nmi_callback = dummy_nmi_callback;
>  }
>  
> +bool nmi_check_continuation(void)
> +{
> +bool ret = false;
> +
> +return ret;
> +}
> +
> +void trigger_nmi_continuation(void)
> +{
> +/*
> + * Issue a self-IPI. Handling is done in spurious_interrupt().
> + * NMI could have happened in IPI sequence, so wait for ICR being idle
> + * again before leaving NMI handler.
> + */
> +send_IPI_self(SPURIOUS_APIC_VECTOR);
> +apic_wait_icr_idle();
> +}

This additionally relies on send_IPI_self_legacy() calling
send_IPI_shortcut(), rather than e.g. resolving the local CPU
number to a destination ID. I think this wants saying maybe
here, but more importantly in that function.

Jan
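
The overall flow of the framework — NMI context registers a continuation and raises a self-IPI, and the vector handler later runs the continuation in ordinary interrupt context — can be modelled outside the hypervisor. The pending flag below stands in for the APIC self-IPI, and all names are illustrative rather than Xen's real ones:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

static bool (*nmi_continuation)(void);  /* work deferred out of NMI context */
static bool self_ipi_pending;           /* models the APIC self-IPI */
static int virq_work_done;

/* Stand-in for the real continuation, e.g. the virq-sending work. */
static bool send_virq_cont(void)
{
    virq_work_done++;
    return true;
}

/* Called from (modelled) NMI context: locking is too fragile here,
 * so just record the work and trigger the ordinary vector. */
static void nmi_handler(void)
{
    nmi_continuation = send_virq_cont;
    self_ipi_pending = true;   /* models send_IPI_self(SPURIOUS_APIC_VECTOR) */
}

/* Stand-in for the vector handler: ordinary interrupt context, where
 * locking and event-channel operations are safe again. */
static void vector_handler(void)
{
    bool (*cont)(void) = nmi_continuation;

    nmi_continuation = NULL;
    if ( cont )
        cont();
}

/* Drives the modelled interrupt delivery. */
static void run_pending_ipis(void)
{
    if ( self_ipi_pending )
    {
        self_ipi_pending = false;
        vector_handler();
    }
}
```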



Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list

2020-11-11 Thread Oleksandr Andrushchenko

On 11/11/20 5:25 PM, Jan Beulich wrote:
> On 11.11.2020 16:13, Oleksandr Andrushchenko wrote:
>> On 11/11/20 5:03 PM, Roger Pau Monné wrote:
>>> On Wed, Nov 11, 2020 at 02:38:47PM +, Oleksandr Andrushchenko wrote:
 On 11/11/20 3:53 PM, Roger Pau Monné wrote:
> On Mon, Nov 09, 2020 at 02:50:23PM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko 
>>
>> The original code depends on pciback to manage assignable device list.
>> The functionality which is implemented by the pciback and the toolstack
>> and which is relevant/missing/needed for ARM:
>>
>> 1. pciback is used as a database for assignable PCI devices, e.g. xl
>>    pci-assignable-{add|remove|list} manipulates that list. So, whenever the
>>    toolstack needs to know which PCI devices can be passed through it reads
>>    that from the relevant sysfs entries of the pciback.
>>
>> 2. pciback is used to hold the unbound PCI devices, e.g. when passing through
>>    a PCI device it needs to be unbound from the relevant device driver and
>>    bound to pciback (strictly speaking it is not required that the device is
>>    bound to pciback, but pciback is again used as a database of the passed
>>    through PCI devices, so we can re-bind the devices back to their original
>>    drivers when guest domain shuts down)
>>
>> 1. As ARM doesn't use pciback implement the above with additional sysctls:
>>    - XEN_SYSCTL_pci_device_set_assigned
> I don't see the point in having this sysfs, Xen already knows when a
> device is assigned because the XEN_DOMCTL_assign_device hypercall is
> used.
 But how does the toolstack know about that? When the toolstack needs to

 list/know all assigned devices it queries pciback's sysfs entries. So, with

 XEN_DOMCTL_assign_device we make that knowledge available to Xen,

 but there are no means for the toolstack to get it back.
>>> But the toolstack will figure out whether a device is assigned or
>>> not by using
>>> XEN_SYSCTL_pci_device_get_assigned/XEN_SYSCTL_pci_device_enum_assigned?
>>>
>>> AFAICT XEN_SYSCTL_pci_device_set_assigned tells Xen a device has been
>>> assigned, but Xen should already know it because
>>> XEN_DOMCTL_assign_device would have been used to assign the device?
>> Ah, I misunderstood you then. So, we only want to drop 
>> XEN_DOMCTL_assign_device
>>
>> and keep the rest.
> Was this a typo? Why would you want to drop XEN_DOMCTL_assign_device?

Indeed it was: s/XEN_DOMCTL_assign_device/XEN_SYSCTL_pci_device_set_assigned

Sorry for confusion

>
> Jan

Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list

2020-11-11 Thread Jan Beulich
On 11.11.2020 16:13, Oleksandr Andrushchenko wrote:
> On 11/11/20 5:03 PM, Roger Pau Monné wrote:
>> On Wed, Nov 11, 2020 at 02:38:47PM +, Oleksandr Andrushchenko wrote:
>>> On 11/11/20 3:53 PM, Roger Pau Monné wrote:
 On Mon, Nov 09, 2020 at 02:50:23PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko 
>
> The original code depends on pciback to manage assignable device list.
> The functionality which is implemented by the pciback and the toolstack
> and which is relevant/missing/needed for ARM:
>
> 1. pciback is used as a database for assignable PCI devices, e.g. xl
>    pci-assignable-{add|remove|list} manipulates that list. So, whenever the
>    toolstack needs to know which PCI devices can be passed through it reads
>    that from the relevant sysfs entries of the pciback.
>
> 2. pciback is used to hold the unbound PCI devices, e.g. when passing through
>    a PCI device it needs to be unbound from the relevant device driver and
>    bound to pciback (strictly speaking it is not required that the device is
>    bound to pciback, but pciback is again used as a database of the passed
>    through PCI devices, so we can re-bind the devices back to their original
>    drivers when guest domain shuts down)
>
> 1. As ARM doesn't use pciback implement the above with additional sysctls:
>    - XEN_SYSCTL_pci_device_set_assigned
 I don't see the point in having this sysfs, Xen already knows when a
 device is assigned because the XEN_DOMCTL_assign_device hypercall is
 used.
>>> But how does the toolstack know about that? When the toolstack needs to
>>>
>>> list/know all assigned devices it queries pciback's sysfs entries. So, with
>>>
>>> XEN_DOMCTL_assign_device we make that knowledge available to Xen,
>>>
>>> but there are no means for the toolstack to get it back.
>> But the toolstack will figure out whether a device is assigned or
>> not by using
>> XEN_SYSCTL_pci_device_get_assigned/XEN_SYSCTL_pci_device_enum_assigned?
>>
>> AFAICT XEN_SYSCTL_pci_device_set_assigned tells Xen a device has been
>> assigned, but Xen should already know it because
>> XEN_DOMCTL_assign_device would have been used to assign the device?
> 
> Ah, I misunderstood you then. So, we only want to drop 
> XEN_DOMCTL_assign_device
> 
> and keep the rest.

Was this a typo? Why would you want to drop XEN_DOMCTL_assign_device?

Jan



Re: [PATCH 11/12] xen/hypfs: add scheduling granularity entry to cpupool entries

2020-11-11 Thread Dario Faggioli
On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
> Add a "sched-gran" entry to the per-cpupool hypfs directories.
> 
> For now make this entry read-only and let it contain one of the
> strings "cpu", "core" or "socket".
> 
> Signed-off-by: Juergen Gross 
>
Acked-by: Dario Faggioli 

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)


signature.asc
Description: This is a digitally signed message part


Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list

2020-11-11 Thread Oleksandr Andrushchenko
On 11/11/20 5:03 PM, Roger Pau Monné wrote:
> On Wed, Nov 11, 2020 at 02:38:47PM +, Oleksandr Andrushchenko wrote:
>> On 11/11/20 3:53 PM, Roger Pau Monné wrote:
>>> On Mon, Nov 09, 2020 at 02:50:23PM +0200, Oleksandr Andrushchenko wrote:
 From: Oleksandr Andrushchenko 

 The original code depends on pciback to manage assignable device list.
 The functionality which is implemented by the pciback and the toolstack
 and which is relevant/missing/needed for ARM:

 1. pciback is used as a database for assignable PCI devices, e.g. xl
    pci-assignable-{add|remove|list} manipulates that list. So, whenever the
    toolstack needs to know which PCI devices can be passed through it reads
    that from the relevant sysfs entries of the pciback.

 2. pciback is used to hold the unbound PCI devices, e.g. when passing through
    a PCI device it needs to be unbound from the relevant device driver and
    bound to pciback (strictly speaking it is not required that the device is
    bound to pciback, but pciback is again used as a database of the passed
    through PCI devices, so we can re-bind the devices back to their original
    drivers when guest domain shuts down)

 1. As ARM doesn't use pciback implement the above with additional sysctls:
    - XEN_SYSCTL_pci_device_set_assigned
>>> I don't see the point in having this sysfs, Xen already knows when a
>>> device is assigned because the XEN_DOMCTL_assign_device hypercall is
>>> used.
>> But how does the toolstack know about that? When the toolstack needs to
>>
>> list/know all assigned devices it queries pciback's sysfs entries. So, with
>>
>> XEN_DOMCTL_assign_device we make that knowledge available to Xen,
>>
>> but there are no means for the toolstack to get it back.
> But the toolstack will figure out whether a device is assigned or
> not by using
> XEN_SYSCTL_pci_device_get_assigned/XEN_SYSCTL_pci_device_enum_assigned?
>
> AFAICT XEN_SYSCTL_pci_device_set_assigned tells Xen a device has been
> assigned, but Xen should already know it because
> XEN_DOMCTL_assign_device would have been used to assign the device?

Ah, I misunderstood you then. So, we only want to drop XEN_DOMCTL_assign_device

and keep the rest.

>
- XEN_SYSCTL_pci_device_get_assigned
- XEN_SYSCTL_pci_device_enum_assigned
 2. Extend struct pci_dev to hold assignment state.
>>> I'm not really fond of this, the hypervisor is no place to store a
>>> database like this, unless it's strictly needed.
>> I do agree and it was previously discussed a bit
>>> IMO the right implementation here would be to split Linux pciback into
>>> two different drivers:
>>>
>>>- The pv-pci backend for doing passthrough to classic PV guests.
>> Ok
>>>- The rest of pciback: device reset, hand-holding driver for devices
>>>  to be assigned and database.
>> These and assigned devices list seem to be the complete set which
>>
>> is needed by the toolstack on ARM. All other functionality provided by
>>
>> pciback is not needed for ARM.
>>
>> Jan was saying [1] that we might still use pciback as is, but simply use only
>>
>> the functionality we need.
>>
>>> I think there must be something similar in KVM that performs the tasks
>>> of my last point, maybe we could piggyback on it?
>> I promised to look at it. I owe this
>>> If we want to go the route proposed by this patch, ie: Xen performing
>>> the functions of pciback you would also have to move the PCI reset
>>> code to Xen, so that you can fully manage the PCI devices from Xen.
>> In case of dom0less this would be the case: no pciback, no Domain-0
> But for dom0less there's no need for any database of assignable
> devices, nor the need to perform pci device resets, as it's all
> assigned at boot time and then never modified?
You are right
>
> Roger.

Thank you,

Oleksandr


Re: [PATCH 10/12] xen/hypfs: add cpupool directories

2020-11-11 Thread Dario Faggioli
On Wed, 2020-11-11 at 16:00 +0100, Jürgen Groß wrote:
> On 11.11.20 15:56, Jan Beulich wrote:
> > On 11.11.2020 15:51, Dario Faggioli wrote:
> > 
> > 
> > Having a hypfs_add_dir() stub would also allow to achieve this, and
> > then, going forward, perhaps also elsewhere.
> 
> I thought about that. This would require a macro for the stub case,
> but I don't think this is a major problem.
> 
Yes, I thought about "stub-bing" at the hypfs_add_dir() level, but we
need a macro for that, for dealing with the parameters.

Also, it's not like having that stub would prevent having #ifdef-s at
all in this file. So that's why I thought about a local stub.

But sure, if you go for the generic macro stub, I'm fine with that too.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




Re: [PATCH] xen/x86: Work around Clang code generation bug with asm parameters

2020-11-11 Thread Jan Beulich
On 11.11.2020 13:45, Andrew Cooper wrote:
> Clang 9 and later don't handle the clobber of %r10 correctly in
> _hypercall64_4().  See https://bugs.llvm.org/show_bug.cgi?id=48122

Are you sure this is a bug? With ...

>  #define _hypercall64_4(type, hcall, a1, a2, a3, a4) \
>  ({  \
> -long res, tmp__;\
> -register long _a4 asm ("r10") = ((long)(a4));   \
> +long res, _a1 = (long)(a1), _a2 = (long)(a2),   \
> +_a3 = (long)(a3);   \
> +register long _a4 asm ("r10") = (long)(a4); \
>  asm volatile (  \
>  "call hypercall_page + %c[offset]"  \
> -: "=a" (res), "=D" (tmp__), "=S" (tmp__), "=d" (tmp__), \
> -  "=&r" (tmp__) ASM_CALL_CONSTRAINT \

... this we've requested "any register", while with ...

> -: [offset] "i" (hcall * 32),\
> -  "1" ((long)(a1)), "2" ((long)(a2)), "3" ((long)(a3)), \
> -  "4" (_a4) \

... this we've asked for that specific register to be initialized
from r10 (and without telling the compiler that r10 is going to
change).

Also, by what I would have hoped is convention meanwhile, the new
macro local variables' names shouldn't start with an underscore,
but instead perhaps end in one. But to be honest, despite knowing
of the latent (albeit highly hypothetical) issue, each time I
find myself making such a comment I'm one tiny step closer to
giving up.

Anyway, with at least title and description changed (or your view
clarified verbally)
Reviewed-by: Jan Beulich 

Jan



Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list

2020-11-11 Thread Roger Pau Monné
On Wed, Nov 11, 2020 at 02:38:47PM +, Oleksandr Andrushchenko wrote:
> On 11/11/20 3:53 PM, Roger Pau Monné wrote:
> > On Mon, Nov 09, 2020 at 02:50:23PM +0200, Oleksandr Andrushchenko wrote:
> >> From: Oleksandr Andrushchenko 
> >>
> >> The original code depends on pciback to manage assignable device list.
> >> The functionality which is implemented by the pciback and the toolstack
> >> and which is relevant/missing/needed for ARM:
> >>
> >> 1. pciback is used as a database for assignable PCI devices, e.g. xl
> >>    pci-assignable-{add|remove|list} manipulates that list. So, whenever the
> >>    toolstack needs to know which PCI devices can be passed through it reads
> >>    that from the relevant sysfs entries of the pciback.
> >>
> >> 2. pciback is used to hold the unbound PCI devices, e.g. when passing through
> >>    a PCI device it needs to be unbound from the relevant device driver and
> >>    bound to pciback (strictly speaking it is not required that the device is
> >>    bound to pciback, but pciback is again used as a database of the passed
> >>    through PCI devices, so we can re-bind the devices back to their original
> >>    drivers when guest domain shuts down)
> >>
> >> 1. As ARM doesn't use pciback implement the above with additional sysctls:
> >>   - XEN_SYSCTL_pci_device_set_assigned
> > I don't see the point in having this sysfs, Xen already knows when a
> > device is assigned because the XEN_DOMCTL_assign_device hypercall is
> > used.
> 
> But how does the toolstack know about that? When the toolstack needs to
> 
> list/know all assigned devices it queries pciback's sysfs entries. So, with
> 
> XEN_DOMCTL_assign_device we make that knowledge available to Xen,
> 
> but there are no means for the toolstack to get it back.

But the toolstack will figure out whether a device is assigned or
not by using
XEN_SYSCTL_pci_device_get_assigned/XEN_SYSCTL_pci_device_enum_assigned?

AFAICT XEN_SYSCTL_pci_device_set_assigned tells Xen a device has been
assigned, but Xen should already know it because
XEN_DOMCTL_assign_device would have been used to assign the device?

> >
> >>   - XEN_SYSCTL_pci_device_get_assigned
> >>   - XEN_SYSCTL_pci_device_enum_assigned
> >> 2. Extend struct pci_dev to hold assignment state.
> > I'm not really fond of this, the hypervisor is no place to store a
> > database like this, unless it's strictly needed.
> I do agree and it was previously discussed a bit
> >
> > IMO the right implementation here would be to split Linux pciback into
> > two different drivers:
> >
> >   - The pv-pci backend for doing passthrough to classic PV guests.
> Ok
> >   - The rest of pciback: device reset, hand-holding driver for devices
> > to be assigned and database.
> 
> These and assigned devices list seem to be the complete set which
> 
> is needed by the toolstack on ARM. All other functionality provided by
> 
> pciback is not needed for ARM.
> 
> Jan was saying [1] that we might still use pciback as is, but simply use only
> 
> the functionality we need.
> 
> >
> > I think there must be something similar in KVM that performs the tasks
> > of my last point, maybe we could piggyback on it?
> I promised to look at it. I owe this
> >
> > If we want to go the route proposed by this patch, ie: Xen performing
> > the functions of pciback you would also have to move the PCI reset
> > code to Xen, so that you can fully manage the PCI devices from Xen.
> In case of dom0less this would be the case: no pciback, no Domain-0

But for dom0less there's no need for any database of assignable
devices, nor the need to perform pci device resets, as it's all
assigned at boot time and then never modified?

Roger.



Re: [PATCH 10/12] xen/hypfs: add cpupool directories

2020-11-11 Thread Jürgen Groß

On 11.11.20 15:56, Jan Beulich wrote:

On 11.11.2020 15:51, Dario Faggioli wrote:

On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:

Add /cpupool/ directories to hypfs. Those are completely
dynamic, so the related hypfs access functions need to be
implemented.

Signed-off-by: Juergen Gross 


So, I'm almost sold... Just one comment:


--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
  
  cpupool_gran_init();
  
+#ifdef CONFIG_HYPFS

+    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
+#endif
+

What would you think about doing this in a helper function
(hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
empty stub if !CONFIG_HYPFS ?

That will save us from having the #ifdef-s again here.


Having a hypfs_add_dir() stub would also allow to achieve this, and
then, going forward, perhaps also elsewhere.


I thought about that. This would require a macro for the stub case,
but I don't think this is a major problem.

Currently there are no other places requiring a stub, but in future this
might change.


Juergen




Re: [PATCH 10/12] xen/hypfs: add cpupool directories

2020-11-11 Thread Dario Faggioli
On Wed, 2020-11-11 at 15:56 +0100, Jürgen Groß wrote:
> On 11.11.20 15:51, Dario Faggioli wrote:
> > 
> > What would you think about doing this in an helper function
> > (hypfs_cpupool_init() ?), implemented inside the above #ifdef and
> > as an
> > empty stub if !CONFIG_HYPFS ?
> > 
> > That will save us from having the #ifdef-s again here.
> > 
> > I'm asking because it's certainly not critical and I don't have a
> > too
> > strong opinion about it. But I do think the code would look better.
> 
> I'm fine either way.
> 
> In case nobody objects I'll change it.
> 
Ok, cool. If you do that and resend, and that's the only change, you
can add:

Reviewed-by: Dario Faggioli 

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




Re: [PATCH 10/12] xen/hypfs: add cpupool directories

2020-11-11 Thread Jan Beulich
On 11.11.2020 15:51, Dario Faggioli wrote:
> On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
>> Add /cpupool/ directories to hypfs. Those are completely
>> dynamic, so the related hypfs access functions need to be
>> implemented.
>>
>> Signed-off-by: Juergen Gross 
>>
> So, I'm almost sold... Just one comment:
> 
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
>>  
>>  cpupool_gran_init();
>>  
>> +#ifdef CONFIG_HYPFS
>> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
>> +#endif
>> +
> What would you think about doing this in a helper function
> (hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
> empty stub if !CONFIG_HYPFS ?
> 
> That will save us from having the #ifdef-s again here.

Having a hypfs_add_dir() stub would also allow to achieve this, and
then, going forward, perhaps also elsewhere.

Jan
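
The reason a stub function doesn't quite suffice here is that a stub macro can drop its arguments before compilation, so the hypfs directory objects need not exist at all in a !CONFIG_HYPFS build. A minimal sketch of the idea (identifiers assumed for illustration, not taken from the series):

```c
#include <assert.h>
#include <stdbool.h>

#ifdef CONFIG_HYPFS
void hypfs_add_dir(struct hypfs_entry_dir *parent,
                   struct hypfs_entry_dir *dir, bool nofault);
#else
/* Arguments are swallowed by the preprocessor, so callers may reference
 * symbols (the directory structures) that only exist under CONFIG_HYPFS. */
#define hypfs_add_dir(parent, dir, nofault) ((void)0)
#endif

static int cpupool_init_calls;

/* Modelled cpupool_init(): this compiles even though hypfs_root and
 * cpupool_dir are never declared in the !CONFIG_HYPFS build. */
static int cpupool_init(void)
{
    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
    cpupool_init_calls++;
    return 0;
}
```

With an inline-function stub, the two `&...` arguments would still have to name declared objects, which would force #ifdef-ery back onto the call sites.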



Re: [PATCH 10/12] xen/hypfs: add cpupool directories

2020-11-11 Thread Jürgen Groß

On 11.11.20 15:51, Dario Faggioli wrote:

On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:

Add /cpupool/ directories to hypfs. Those are completely
dynamic, so the related hypfs access functions need to be
implemented.

Signed-off-by: Juergen Gross 


So, I'm almost sold... Just one comment:


--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
  
  cpupool_gran_init();
  
+#ifdef CONFIG_HYPFS

+    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
+#endif
+

What would you think about doing this in a helper function
(hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
empty stub if !CONFIG_HYPFS ?

That will save us from having the #ifdef-s again here.

I'm asking because it's certainly not critical and I don't have a too
strong opinion about it. But I do think the code would look better.


I'm fine either way.

In case nobody objects I'll change it.


Juergen




Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list

2020-11-11 Thread Jan Beulich
On 09.11.2020 13:50, Oleksandr Andrushchenko wrote:
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -879,6 +879,43 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>  return ret;
>  }
>  
> +#ifdef CONFIG_ARM
> +int pci_device_set_assigned(u16 seg, u8 bus, u8 devfn, bool assigned)
> +{
> +struct pci_dev *pdev;
> +
> +pdev = pci_get_pdev(seg, bus, devfn);
> +if ( !pdev )
> +{
> +printk(XENLOG_ERR "Can't find PCI device %04x:%02x:%02x.%u\n",
> +   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
> +return -ENODEV;
> +}
> +
> +pdev->assigned = assigned;
> +printk(XENLOG_ERR "pciback %sassign PCI device %04x:%02x:%02x.%u\n",
> +   assigned ? "" : "de-",
> +   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
> +
> +return 0;
> +}
> +
> +int pci_device_get_assigned(u16 seg, u8 bus, u8 devfn)
> +{
> +struct pci_dev *pdev;
> +
> +pdev = pci_get_pdev(seg, bus, devfn);
> +if ( !pdev )
> +{
> +printk(XENLOG_ERR "Can't find PCI device %04x:%02x:%02x.%u\n",
> +   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
> +return -ENODEV;
> +}
> +
> +return pdev->assigned ? 0 : -ENODEV;
> +}
> +#endif
> +
>  #ifndef CONFIG_ARM
>  /*TODO :Implement MSI support for ARM  */
>  static int pci_clean_dpci_irq(struct domain *d,
> @@ -1821,6 +1858,62 @@ int iommu_do_pci_domctl(
>  return ret;
>  }
>  
> +#ifdef CONFIG_ARM
> +struct list_assigned {
> +uint32_t cur_idx;
> +uint32_t from_idx;
> +bool assigned;
> +domid_t *domain;
> +uint32_t *machine_sbdf;
> +};
> +
> +static int _enum_assigned_pci_devices(struct pci_seg *pseg, void *arg)
> +{
> +struct list_assigned *ctxt = arg;
> +struct pci_dev *pdev;
> +
> +list_for_each_entry ( pdev, >alldevs_list, alldevs_list )
> +{
> +if ( pdev->assigned == ctxt->assigned )
> +{
> +if ( ctxt->cur_idx == ctxt->from_idx )
> +{
> +*ctxt->domain = pdev->domain->domain_id;
> +*ctxt->machine_sbdf = pdev->sbdf.sbdf;
> +return 1;
> +}
> +ctxt->cur_idx++;
> +}
> +}
> +return 0;
> +}
> +
> +int pci_device_enum_assigned(bool report_not_assigned,
> + uint32_t from_idx, domid_t *domain,
> + uint32_t *machine_sbdf)
> +{
> +struct list_assigned ctxt = {
> +.assigned = !report_not_assigned,
> +.cur_idx = 0,
> +.from_idx = from_idx,
> +.domain = domain,
> +.machine_sbdf = machine_sbdf,
> +};
> +int ret;
> +
> +pcidevs_lock();
> +ret = pci_segments_iterate(_enum_assigned_pci_devices, &ctxt);
> +pcidevs_unlock();
> +/*
> + * If not found then report as EINVAL to mark
> + * enumeration process finished.
> + */
> +if ( !ret )
> +return -EINVAL;
> +return 0;
> +}
> +#endif

Just in case the earlier comments you've got don't lead to removal
of this code - unless there's a real need for them to be put here,
under #ifdef, please add a new xen/drivers/passthrough/arm/pci.c
instead. Even if for just part of the code, this would then also
help with more clear maintainership of this Arm specific code.

Jan
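
Independent of where the code ends up living, the cursor-style interface above is driven by calling it with an increasing from_idx until -EINVAL signals the end of enumeration. A self-contained model of that contract (fake device list and stand-in names throughout):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#ifndef EINVAL
#define EINVAL 22   /* stand-in; Xen defines its own errno values */
#endif

struct fake_pdev { bool assigned; uint16_t domid; uint32_t sbdf; };

static struct fake_pdev devs[] = {
    { true, 1, 0x0100 }, { false, 0, 0x0200 }, { true, 2, 0x0300 },
};

/* Models pci_device_enum_assigned(): report the from_idx'th device in
 * the requested state, or -EINVAL once enumeration is exhausted. */
static int enum_assigned(bool report_not_assigned, uint32_t from_idx,
                         uint16_t *domid, uint32_t *sbdf)
{
    uint32_t cur = 0;
    unsigned int i;

    for ( i = 0; i < sizeof(devs) / sizeof(devs[0]); i++ )
    {
        if ( devs[i].assigned == !report_not_assigned )
        {
            if ( cur == from_idx )
            {
                *domid = devs[i].domid;
                *sbdf = devs[i].sbdf;
                return 0;
            }
            cur++;
        }
    }
    return -EINVAL;
}
```

A caller (the toolstack side of the sysctl) simply loops from_idx = 0, 1, 2, ... until the -EINVAL terminator.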



Re: [PATCH 10/12] xen/hypfs: add cpupool directories

2020-11-11 Thread Dario Faggioli
On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
> Add /cpupool/ directories to hypfs. Those are completely
> dynamic, so the related hypfs access functions need to be
> implemented.
> 
> Signed-off-by: Juergen Gross 
>
So, I'm almost sold... Just one comment:

> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
>  
>  cpupool_gran_init();
>  
> +#ifdef CONFIG_HYPFS
> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
> +#endif
> +
What would you think about doing this in a helper function
(hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
empty stub if !CONFIG_HYPFS ?

That will save us from having the #ifdef-s again here.

I'm asking because it's certainly not critical and I don't have a too
strong opinion about it. But I do think the code would look better.

>  cpupool0 = cpupool_create(0, 0, );
>  BUG_ON(cpupool0 == NULL);
>  cpupool_put(cpupool0);

-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




[xen-4.10-testing test] 156633: tolerable FAIL - PUSHED

2020-11-11 Thread osstest service owner
flight 156633 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156633/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  3 hosts-allocate   fail  like 156396
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install 
fail like 156396
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install 
fail like 156396
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 156396
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stopfail like 156396
 test-armhf-armhf-libvirt 16 saverestore-support-checkfail  like 156396
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 156396
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stopfail like 156396
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 156396
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 156396
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 156396
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 156396
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 156396
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156396
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check 
fail never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check 
fail never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass

version targeted for testing:
 xen  15b298097289f1c11b981454a3dc912b95e2f65b
baseline version:
 xen  7a4ec792d12d58d14e4d0a9cb569be4fd4fe9cf5

Last test of basis   156396  2020-11-04 09:05:41 Z    7 days
Testing same since   156633  2020-11-10 18:05:53 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Julien Grall 
  Roger Pau Monné 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm  

Re: [PATCH 01/12] xen/cpupool: add cpu to sched_res_mask when removing it from cpupool

2020-11-11 Thread Jürgen Groß

On 11.11.20 15:32, Dario Faggioli wrote:

> On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
>> When a cpu is removed from a cpupool and added to the free cpus it
>> should be added to sched_res_mask, too.
>>
>> The related removal from sched_res_mask in case of core scheduling
>> is already done in schedule_cpu_add().
>>
>> As long as all cpupools share the same scheduling granularity there
>> is nothing going wrong with the missing removal,
>
> This patch is adding an addition of the CPU to sched_res_mask, which
> was missing... So isn't the above "there is nothing going wrong with
> the missing addition", or something like that?

Oh yes, of course.

Will fix that.

> Or, if it's an actual missing removal that we are referring to here,
> then it must be clarified which one.
>
>> but this will change
>> when per-cpupool granularity is fully supported.
>>
>> Signed-off-by: Juergen Gross 
>
> With the above fixed or clarified:
>
> Reviewed-by: Dario Faggioli 


Thanks,


Juergen




Re: [PATCH 04/10] [WORKAROUND] xen/arm: Update hwdom's p2m to trap ECAM space

2020-11-11 Thread Roger Pau Monné
On Mon, Nov 09, 2020 at 02:50:25PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko 
> 
> Host bridge controller's ECAM space is mapped into Domain-0's p2m,
> thus it is not possible to trap the same for vPCI via MMIO handlers.
> For this to work we need to unmap those mappings in p2m.
> 
> TODO (Julien): It would be best if we avoid the map/unmap operation.
> So, maybe we want to introduce another way to avoid the mapping.
> Maybe by changing the type of the controller to "PCI_HOSTCONTROLLER"
> and checking if this is a PCI hostcontroller avoid the mapping.

I know very little about Arm to be able to provide meaningful comments
here. I agree that creating the maps just to remove them afterwards is
not the right approach, we should instead avoid those mappings from
being created in the first place.

Roger.



Re: [PATCH 03/10] xen/arm: Setup MMIO range trap handlers for hardware domain

2020-11-11 Thread Oleksandr Andrushchenko

On 11/11/20 4:39 PM, Roger Pau Monné wrote:
> On Mon, Nov 09, 2020 at 02:50:24PM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko 
>>
>> In order for vPCI to work, all accesses to the PCI configuration space
>> need to be synchronized among all entities, e.g. the hardware domain and
>> guests. For that, implement PCI host bridge specific callbacks to
>> properly set up those ranges depending on the host bridge implementation.
>>
>> This callback is optional and may not be used by non-ECAM host bridges.
>>
>> Signed-off-by: Oleksandr Andrushchenko 
>> ---
>>   xen/arch/arm/pci/pci-host-common.c  | 16 
>>   xen/arch/arm/pci/pci-host-generic.c | 15 +--
>>   xen/arch/arm/vpci.c | 16 +++-
> So this is based on top of another series, maybe it would make sense
> to post those together, or else it's hard to get the right context.

This is based on ARM's PCI passthrough RFC series [1]

You can also see the whole picture at [2]

>
>>   xen/include/asm-arm/pci.h   |  7 +++
>>   4 files changed, 51 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/arch/arm/pci/pci-host-common.c 
>> b/xen/arch/arm/pci/pci-host-common.c
>> index b011c7eff3c8..b81184d34980 100644
>> --- a/xen/arch/arm/pci/pci-host-common.c
>> +++ b/xen/arch/arm/pci/pci-host-common.c
>> @@ -219,6 +219,22 @@ struct device *pci_find_host_bridge_device(struct 
>> device *dev)
>>   }
>>   return dt_to_dev(bridge->dt_node);
>>   }
>> +
>> +int pci_host_iterate_bridges(struct domain *d,
>> + int (*clb)(struct domain *d,
>> +struct pci_host_bridge *bridge))
>> +{
>> +struct pci_host_bridge *bridge;
>> +int err;
>> +
>> +list_for_each_entry( bridge, &pci_host_bridges, node )
>> +{
>> +err = clb(d, bridge);
>> +if ( err )
>> +return err;
>> +}
>> +return 0;
>> +}
>>   /*
>>* Local variables:
>>* mode: C
>> diff --git a/xen/arch/arm/pci/pci-host-generic.c 
>> b/xen/arch/arm/pci/pci-host-generic.c
>> index 54dd123e95c7..469df3da0116 100644
>> --- a/xen/arch/arm/pci/pci-host-generic.c
>> +++ b/xen/arch/arm/pci/pci-host-generic.c
>> @@ -85,12 +85,23 @@ int pci_ecam_config_read(struct pci_host_bridge *bridge, 
>> uint32_t sbdf,
>>   return 0;
>>   }
>>   
>> +static int pci_ecam_register_mmio_handler(struct domain *d,
>> +  struct pci_host_bridge *bridge,
> I think you can also constify bridge here.
Makes sense
>
> Thanks, Roger.

Thank you,

Oleksandr

[1] https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg84452.html

[2] https://github.com/andr2000/xen/tree/vpci_rfc


Re: [PATCH 04/12] xen/sched: sort included headers in cpupool.c

2020-11-11 Thread Dario Faggioli
On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
> Common style is to include header files in alphabetical order. Sort
> the
> #include statements in cpupool.c accordingly.
> 
> Signed-off-by: Juergen Gross 
>
Acked-by: Dario Faggioli 

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list

2020-11-11 Thread Oleksandr Andrushchenko
On 11/11/20 3:53 PM, Roger Pau Monné wrote:
> On Mon, Nov 09, 2020 at 02:50:23PM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko 
>>
>> The original code depends on pciback to manage assignable device list.
>> The functionality which is implemented by the pciback and the toolstack
>> and which is relevant/missing/needed for ARM:
>>
>> 1. pciback is used as a database for assignable PCI devices, e.g. xl
>> pci-assignable-{add|remove|list} manipulates that list. So, whenever the
>> toolstack needs to know which PCI devices can be passed through it reads
>> that from the relevant sysfs entries of the pciback.
>>
>> 2. pciback is used to hold the unbound PCI devices, e.g. when passing through
>> a PCI device it needs to be unbound from the relevant device driver and 
>> bound
>> to pciback (strictly speaking it is not required that the device is 
>> bound to
>> pciback, but pciback is again used as a database of the passed through 
>> PCI
>> devices, so we can re-bind the devices back to their original drivers 
>> when
>> guest domain shuts down)
>>
>> 1. As ARM doesn't use pciback implement the above with additional sysctls:
>>   - XEN_SYSCTL_pci_device_set_assigned
> I don't see the point in having this sysctl; Xen already knows when a
> device is assigned because the XEN_DOMCTL_assign_device hypercall is
> used.

But how does the toolstack know about that? When the toolstack needs to
list/know all assigned devices it queries pciback's sysfs entries. So, with
XEN_DOMCTL_assign_device we make that knowledge available to Xen,
but there are no means for the toolstack to get it back.

>
>>   - XEN_SYSCTL_pci_device_get_assigned
>>   - XEN_SYSCTL_pci_device_enum_assigned
>> 2. Extend struct pci_dev to hold assignment state.
> I'm not really fond of this; the hypervisor is no place to store a
> database like this, unless it's strictly needed.
I do agree and it was previously discussed a bit
>
> IMO the right implementation here would be to split Linux pciback into
> two different drivers:
>
>   - The pv-pci backend for doing passthrough to classic PV guests.
Ok
>   - The rest of pciback: device reset, hand-holding driver for devices
> to be assigned and database.

These and assigned devices list seem to be the complete set which
is needed by the toolstack on ARM. All other functionality provided by
pciback is not needed for ARM.

Jan was saying [1] that we might still use pciback as is, but simply use only
the functionality we need.

>
> I think there must be something similar in KVM that performs the tasks
> of my last point, maybe we could piggyback on it?
I promised to look at it. I owe this
>
> If we want to go the route proposed by this patch, ie: Xen performing
> the functions of pciback you would also have to move the PCI reset
> code to Xen, so that you can fully manage the PCI devices from Xen.
In case of dom0less this would be the case: no pciback, no Domain-0
>
>> Signed-off-by: Oleksandr Andrushchenko 
>> ---
>>   tools/libxc/include/xenctrl.h |   9 +++
>>   tools/libxc/xc_domain.c   |   1 +
>>   tools/libxc/xc_misc.c |  46 +++
>>   tools/libxl/Makefile  |   4 ++
>>   tools/libxl/libxl_pci.c   | 105 --
>>   xen/arch/arm/sysctl.c |  66 -
>>   xen/drivers/passthrough/pci.c |  93 ++
>>   xen/include/public/sysctl.h   |  40 +
>>   xen/include/xen/pci.h |  12 
>>   9 files changed, 370 insertions(+), 6 deletions(-)
> I've done some light review below given my questions above.

This is more than I expected for an RFC series

Thank you!

>
>> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
>> index 4c89b7294c4f..77029013da7d 100644
>> --- a/tools/libxc/include/xenctrl.h
>> +++ b/tools/libxc/include/xenctrl.h
>> @@ -2652,6 +2652,15 @@ int xc_livepatch_replace(xc_interface *xch, char 
>> *name, uint32_t timeout, uint32
>>   int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
>>xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
>>   
>> +typedef xen_sysctl_pci_device_enum_assigned_t xc_pci_device_enum_assigned_t;
>> +
>> +int xc_pci_device_set_assigned(xc_interface *xch, uint32_t machine_sbdf,
>> +   bool assigned);
>> +int xc_pci_device_get_assigned(xc_interface *xch, uint32_t machine_sbdf);
>> +
>> +int xc_pci_device_enum_assigned(xc_interface *xch,
>> +xc_pci_device_enum_assigned_t *e);
>> +
>>   /* Compat shims */
>>   #include "xenctrl_compat.h"
>>   
>> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
>> index 71829c2bce3e..d515191e9243 100644
>> --- a/tools/libxc/xc_domain.c
>> +++ b/tools/libxc/xc_domain.c
>> @@ -2321,6 +2321,7 @@ int xc_domain_soft_reset(xc_interface *xch,
>>   domctl.domain = domid;
>>   return do_domctl(xch, &domctl);
>>   }
>> +
>>   /*
>>* 

Re: [PATCH 03/10] xen/arm: Setup MMIO range trap handlers for hardware domain

2020-11-11 Thread Roger Pau Monné
On Mon, Nov 09, 2020 at 02:50:24PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko 
> 
> In order for vPCI to work, all accesses to the PCI configuration space
> need to be synchronized among all entities, e.g. the hardware domain and
> guests. For that, implement PCI host bridge specific callbacks to
> properly set up those ranges depending on the host bridge implementation.
> 
> This callback is optional and may not be used by non-ECAM host bridges.
> 
> Signed-off-by: Oleksandr Andrushchenko 
> ---
>  xen/arch/arm/pci/pci-host-common.c  | 16 
>  xen/arch/arm/pci/pci-host-generic.c | 15 +--
>  xen/arch/arm/vpci.c | 16 +++-

So this is based on top of another series, maybe it would make sense
to post those together, or else it's hard to get the right context.

>  xen/include/asm-arm/pci.h   |  7 +++
>  4 files changed, 51 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/pci/pci-host-common.c 
> b/xen/arch/arm/pci/pci-host-common.c
> index b011c7eff3c8..b81184d34980 100644
> --- a/xen/arch/arm/pci/pci-host-common.c
> +++ b/xen/arch/arm/pci/pci-host-common.c
> @@ -219,6 +219,22 @@ struct device *pci_find_host_bridge_device(struct device 
> *dev)
>  }
>  return dt_to_dev(bridge->dt_node);
>  }
> +
> +int pci_host_iterate_bridges(struct domain *d,
> + int (*clb)(struct domain *d,
> +struct pci_host_bridge *bridge))
> +{
> +struct pci_host_bridge *bridge;
> +int err;
> +
> +list_for_each_entry( bridge, &pci_host_bridges, node )
> +{
> +err = clb(d, bridge);
> +if ( err )
> +return err;
> +}
> +return 0;
> +}
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/arm/pci/pci-host-generic.c 
> b/xen/arch/arm/pci/pci-host-generic.c
> index 54dd123e95c7..469df3da0116 100644
> --- a/xen/arch/arm/pci/pci-host-generic.c
> +++ b/xen/arch/arm/pci/pci-host-generic.c
> @@ -85,12 +85,23 @@ int pci_ecam_config_read(struct pci_host_bridge *bridge, 
> uint32_t sbdf,
>  return 0;
>  }
>  
> +static int pci_ecam_register_mmio_handler(struct domain *d,
> +  struct pci_host_bridge *bridge,

I think you can also constify bridge here.

Thanks, Roger.



Re: [PATCH 02/12] xen/cpupool: add missing bits for per-cpupool scheduling granularity

2020-11-11 Thread Dario Faggioli
On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
> Even with storing the scheduling granularity in struct cpupool there
> are still a few bits missing for being able to have cpupools with
> different granularity (apart from the missing interface for setting
> the individual granularities): the number of cpus in a scheduling
> unit is always taken from the global sched_granularity variable.
> 
> So store the value in struct cpupool and use that instead of
> sched_granularity.
> 
> Signed-off-by: Juergen Gross 
>
Reviewed-by: Dario Faggioli 

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




Re: [PATCH 01/12] xen/cpupool: add cpu to sched_res_mask when removing it from cpupool

2020-11-11 Thread Dario Faggioli
On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
> When a cpu is removed from a cpupool and added to the free cpus it
> should be added to sched_res_mask, too.
> 
> The related removal from sched_res_mask in case of core scheduling
> is already done in schedule_cpu_add().
> 
> As long as all cpupools share the same scheduling granularity there
> is nothing going wrong with the missing removal, 
>
This patch is adding an addition of the CPU to sched_res_mask, which
was missing... So isn't the above "there is nothing going wrong with
the missing addition", or something like that?

Or, if it's an actual missing removal that we are referring to here,
then it must be clarified which one.

> but this will change
> when per-cpupool granularity is fully supported.
> 
> Signed-off-by: Juergen Gross 
>
With the above fixed or clarified:

Reviewed-by: Dario Faggioli 

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




Re: [PATCH 2/2] drm/mediatek: Use struct dma_buf_map in GEM vmap ops

2020-11-11 Thread Chun-Kuang Hu
Hi, Thomas:

Thomas Zimmermann  wrote on Mon, 9 Nov 2020 at 18:32:
>
> Fixes a build failure with mediatek.
>
> This change was supposed to be part of commit 49a3f51dfeee ("drm/gem:
> Use struct dma_buf_map in GEM vmap ops and convert GEM backends"), but
> mediatek was forgotten.

Acked-by: Chun-Kuang Hu 

>
> Signed-off-by: Thomas Zimmermann 
> Fixes: 49a3f51dfeee ("drm/gem: Use struct dma_buf_map in GEM vmap ops and 
> convert GEM backends")
> Cc: Thomas Zimmermann 
> Cc: Christian König 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Maarten Lankhorst 
> Cc: Maxime Ripard 
> Cc: Dave Airlie 
> Cc: Lucas Stach 
> Cc: Russell King 
> Cc: Christian Gmeiner 
> Cc: Qiang Yu 
> Cc: Ben Skeggs 
> Cc: Rob Herring 
> Cc: Tomeu Vizoso 
> Cc: Steven Price 
> Cc: Alyssa Rosenzweig 
> Cc: Gerd Hoffmann 
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: Sandy Huang 
> Cc: "Heiko Stübner" 
> Cc: Hans de Goede 
> Cc: Sean Paul 
> Cc: Eric Anholt 
> Cc: Rodrigo Siqueira 
> Cc: Melissa Wen 
> Cc: Haneen Mohammed 
> Cc: Oleksandr Andrushchenko 
> Cc: Sumit Semwal 
> Cc: Emil Velikov 
> Cc: Marek Szyprowski 
> Cc: Arunpravin 
> Cc: Huang Rui 
> Cc: Luben Tuikov 
> Cc: Madhav Chauhan 
> Cc: Nirmoy Das 
> Cc: Jason Gunthorpe 
> Cc: Sam Ravnborg 
> Cc: Chris Wilson 
> Cc: dri-de...@lists.freedesktop.org
> Cc: etna...@lists.freedesktop.org
> Cc: l...@lists.freedesktop.org
> Cc: nouv...@lists.freedesktop.org
> Cc: virtualizat...@lists.linux-foundation.org
> Cc: spice-de...@lists.freedesktop.org
> Cc: amd-...@lists.freedesktop.org
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: linux-rockc...@lists.infradead.org
> Cc: xen-devel@lists.xenproject.org
> ---
>  drivers/gpu/drm/mediatek/mtk_drm_gem.c | 20 
>  drivers/gpu/drm/mediatek/mtk_drm_gem.h |  4 ++--
>  2 files changed, 14 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c 
> b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> index cdd1a6e61564..28a2ee1336ef 100644
> --- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> +++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> @@ -240,23 +240,25 @@ struct drm_gem_object 
> *mtk_gem_prime_import_sg_table(struct drm_device *dev,
> return _gem->base;
>  }
>
> -void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj)
> +int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map 
> *map)
>  {
> struct mtk_drm_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
> -   struct sg_table *sgt;
> +   struct sg_table *sgt = NULL;
> unsigned int npages;
>
> if (mtk_gem->kvaddr)
> -   return mtk_gem->kvaddr;
> +   goto out;
>
> sgt = mtk_gem_prime_get_sg_table(obj);
> if (IS_ERR(sgt))
> -   return NULL;
> +   return PTR_ERR(sgt);
>
> npages = obj->size >> PAGE_SHIFT;
> mtk_gem->pages = kcalloc(npages, sizeof(*mtk_gem->pages), GFP_KERNEL);
> -   if (!mtk_gem->pages)
> -   goto out;
> +   if (!mtk_gem->pages) {
> +   kfree(sgt);
> +   return -ENOMEM;
> +   }
>
> drm_prime_sg_to_page_addr_arrays(sgt, mtk_gem->pages, NULL, npages);
>
> @@ -265,13 +267,15 @@ void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj)
>
>  out:
> kfree(sgt);
> +   dma_buf_map_set_vaddr(map, mtk_gem->kvaddr);
>
> -   return mtk_gem->kvaddr;
> +   return 0;
>  }
>
> -void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map 
> *map)
>  {
> struct mtk_drm_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
> +   void *vaddr = map->vaddr;
>
> if (!mtk_gem->pages)
> return;
> diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.h 
> b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
> index ff9f976d9807..6da5ccb4b933 100644
> --- a/drivers/gpu/drm/mediatek/mtk_drm_gem.h
> +++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
> @@ -45,7 +45,7 @@ int mtk_drm_gem_mmap_buf(struct drm_gem_object *obj,
>  struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj);
>  struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach, struct sg_table 
> *sg);
> -void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj);
> -void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map 
> *map);
> +void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map 
> *map);
>
>  #endif
> --
> 2.29.2
>



Re: [PATCH v3 5/7] x86: guard against straight-line speculation past RET

2020-11-11 Thread Jan Beulich
On 11.11.2020 15:19, Roger Pau Monné wrote:
> On Wed, Nov 11, 2020 at 02:33:34PM +0100, Jan Beulich wrote:
>> On 11.11.2020 12:15, Roger Pau Monné wrote:
>>> On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
 Under certain conditions CPUs can speculate into the instruction stream
 past a RET instruction. Guard against this just like 3b7dab93f240
 ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
 did - by inserting an "INT $3" insn. It's merely the mechanics of how to
 achieve this that differ: A set of macros gets introduced to post-
 process RET insns issued by the compiler (or living in assembly files).

 Unfortunately for clang this requires further features their built-in
 assembler doesn't support: We need to be able to override insn mnemonics
 produced by the compiler (which may be impossible, if internally
 assembly mnemonics never get generated), and we want to use \(text)
 escaping / quoting in the auxiliary macro.

 Signed-off-by: Jan Beulich 
 Acked-by: Roger Pau Monné 
 ---
 TBD: Would be nice to avoid the additions in .init.text, but a query to
  the binutils folks regarding the ability to identify the section
  stuff is in (by Peter Zijlstra over a year ago:
  https://sourceware.org/pipermail/binutils/2019-July/107528.html)
  has been left without helpful replies.
 ---
 v3: Use .byte 0xc[23] instead of the nested macros.
 v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.

 --- a/xen/Makefile
 +++ b/xen/Makefile
 @@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
  # https://bugs.llvm.org/show_bug.cgi?id=36110
  t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile 
 $(open)".macro FOO;.endm",-no-integrated-as)
  
 -CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
 +# Check whether \(text) escaping in macro bodies is supported.
 +t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 
 8",,-no-integrated-as)
 +
 +# Check whether macros can override insn mnemonics in inline assembly.
 +t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; 
 .error; .endm",-no-integrated-as)
>>>
>>> I was going over this to post a bug report to LLVM, but it seems like
>>> gcc also doesn't overwrite ret when using the above snippet:
>>>
>>> https://godbolt.org/z/oqsPTv
>>
>> I can't see what's different from
>>
>> void test(void) {
>>  asm volatile (".macro ret; .error; .endm; .macro retq; .error; .endm");
>> }
>>
>> but this one produces "Error: .error directive invoked in source file"
>> for me with both old and new gcc.
> 
> You are right, I think godbolt is somehow busted?

Or maybe they really only compile to assembly, while the error results
from the assembler?

Jan



Re: [PATCH v3 5/7] x86: guard against straight-line speculation past RET

2020-11-11 Thread Roger Pau Monné
On Wed, Nov 11, 2020 at 02:33:34PM +0100, Jan Beulich wrote:
> On 11.11.2020 12:15, Roger Pau Monné wrote:
> > On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
> >> Under certain conditions CPUs can speculate into the instruction stream
> >> past a RET instruction. Guard against this just like 3b7dab93f240
> >> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> >> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
> >> achieve this that differ: A set of macros gets introduced to post-
> >> process RET insns issued by the compiler (or living in assembly files).
> >>
> >> Unfortunately for clang this requires further features their built-in
> >> assembler doesn't support: We need to be able to override insn mnemonics
> >> produced by the compiler (which may be impossible, if internally
> >> assembly mnemonics never get generated), and we want to use \(text)
> >> escaping / quoting in the auxiliary macro.
> >>
> >> Signed-off-by: Jan Beulich 
> >> Acked-by: Roger Pau Monné 
> >> ---
> >> TBD: Would be nice to avoid the additions in .init.text, but a query to
> >>  the binutils folks regarding the ability to identify the section
> >>  stuff is in (by Peter Zijlstra over a year ago:
> >>  https://sourceware.org/pipermail/binutils/2019-July/107528.html)
> >>  has been left without helpful replies.
> >> ---
> >> v3: Use .byte 0xc[23] instead of the nested macros.
> >> v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.
> >>
> >> --- a/xen/Makefile
> >> +++ b/xen/Makefile
> >> @@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
> >>  # https://bugs.llvm.org/show_bug.cgi?id=36110
> >>  t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile 
> >> $(open)".macro FOO;.endm",-no-integrated-as)
> >>  
> >> -CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
> >> +# Check whether \(text) escaping in macro bodies is supported.
> >> +t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 
> >> 8",,-no-integrated-as)
> >> +
> >> +# Check whether macros can override insn mnemonics in inline assembly.
> >> +t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; 
> >> .error; .endm",-no-integrated-as)
> > 
> > I was going over this to post a bug report to LLVM, but it seems like
> > gcc also doesn't overwrite ret when using the above snippet:
> > 
> > https://godbolt.org/z/oqsPTv
> 
> I can't see what's different from
> 
> void test(void) {
>   asm volatile (".macro ret; .error; .endm; .macro retq; .error; .endm");
> }
> 
> but this one produces "Error: .error directive invoked in source file"
> for me with both old and new gcc.

You are right, I think godbolt is somehow busted?

I can reproduce your results with my version of gcc, so will just
report to LLVM.

Roger.



Re: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code run with PVH domains on ARM

2020-11-11 Thread Roger Pau Monné
On Wed, Nov 11, 2020 at 02:12:56PM +, Oleksandr Andrushchenko wrote:
> 
> On 11/11/20 3:55 PM, Roger Pau Monné wrote:
> > On Wed, Nov 11, 2020 at 01:10:01PM +, Oleksandr Andrushchenko wrote:
> >> On 11/11/20 2:31 PM, Roger Pau Monné wrote:
> >>> On Mon, Nov 09, 2020 at 02:50:22PM +0200, Oleksandr Andrushchenko wrote:
>  From: Oleksandr Andrushchenko 
> 
 According to https://wiki.xenproject.org/wiki/Linux_PVH:
> 
>  Items not supported by PVH
> - PCI pass through (as of Xen 4.10)
> 
>  Allow running PCI remove code on ARM and do not assert for PVH domains.
> 
>  Signed-off-by: Oleksandr Andrushchenko 
>  ---
> tools/libxl/Makefile| 4 
> tools/libxl/libxl_pci.c | 4 +++-
> 2 files changed, 7 insertions(+), 1 deletion(-)
> 
>  diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
>  index 241da7fff6f4..f3806aafcb4e 100644
>  --- a/tools/libxl/Makefile
>  +++ b/tools/libxl/Makefile
>  @@ -130,6 +130,10 @@ endif
> 
> LIBXL_LIBS += -lyajl
> 
>  +ifeq ($(CONFIG_ARM),y)
>  +CFLAGS += -DCONFIG_ARM
>  +endif
>  +
> LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o 
>  libxl_pci.o \
>   libxl_dom.o libxl_exec.o libxl_xshelp.o 
>  libxl_device.o \
>   libxl_internal.o libxl_utils.o libxl_uuid.o \
>  diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
>  index bc5843b13701..b93cf976642b 100644
>  --- a/tools/libxl/libxl_pci.c
>  +++ b/tools/libxl/libxl_pci.c
>  @@ -1915,8 +1915,10 @@ static void do_pci_remove(libxl__egc *egc, 
>  uint32_t domid,
> goto out_fail;
> }
> } else {
>  +/* PCI passthrough can also run on ARM PVH */
>  +#ifndef CONFIG_ARM
> assert(type == LIBXL_DOMAIN_TYPE_PV);
>  -
>  +#endif
> >>> I would just remove the assert now if this is to be used by Arm and
> >>> you don't need to fork the file for Arm.
> >> Sounds good, I will drop then
> >>
> >> But what would be the right explanation then? I mean why there was an 
> >> ASSERT
> >>
> >> and now it is safe (for x86) to remove that?
> > An assert is just a safe belt, the expectation is that it's never hit
> > by actual code. Given that this path will now also be used by PVH
> > (even if only on Arm) I don't see the point in keeping the assert, and
> > making it conditional to != Arm seems worse than just dropping it.
> 
> Ok, so I can write in the patch description something like:
> 
> "this path is now used by PVH, so the assert is no longer valid"
> 
> Does it sound ok?

LGTM.

Roger.



Re: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code run with PVH domains on ARM

2020-11-11 Thread Oleksandr Andrushchenko

On 11/11/20 3:55 PM, Roger Pau Monné wrote:
> On Wed, Nov 11, 2020 at 01:10:01PM +, Oleksandr Andrushchenko wrote:
>> On 11/11/20 2:31 PM, Roger Pau Monné wrote:
>>> On Mon, Nov 09, 2020 at 02:50:22PM +0200, Oleksandr Andrushchenko wrote:
 From: Oleksandr Andrushchenko 

 According to https://wiki.xenproject.org/wiki/Linux_PVH:

 Items not supported by PVH
- PCI pass through (as of Xen 4.10)

 Allow running PCI remove code on ARM and do not assert for PVH domains.

 Signed-off-by: Oleksandr Andrushchenko 
 ---
tools/libxl/Makefile| 4 
tools/libxl/libxl_pci.c | 4 +++-
2 files changed, 7 insertions(+), 1 deletion(-)

 diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
 index 241da7fff6f4..f3806aafcb4e 100644
 --- a/tools/libxl/Makefile
 +++ b/tools/libxl/Makefile
 @@ -130,6 +130,10 @@ endif

LIBXL_LIBS += -lyajl

 +ifeq ($(CONFIG_ARM),y)
 +CFLAGS += -DCONFIG_ARM
 +endif
 +
LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
libxl_dom.o libxl_exec.o libxl_xshelp.o 
 libxl_device.o \
libxl_internal.o libxl_utils.o libxl_uuid.o \
 diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
 index bc5843b13701..b93cf976642b 100644
 --- a/tools/libxl/libxl_pci.c
 +++ b/tools/libxl/libxl_pci.c
 @@ -1915,8 +1915,10 @@ static void do_pci_remove(libxl__egc *egc, uint32_t 
 domid,
goto out_fail;
}
} else {
 +/* PCI passthrough can also run on ARM PVH */
 +#ifndef CONFIG_ARM
assert(type == LIBXL_DOMAIN_TYPE_PV);
 -
 +#endif
>>> I would just remove the assert now if this is to be used by Arm and
>>> you don't need to fork the file for Arm.
>> Sounds good, I will drop then
>>
>> But what would be the right explanation then? I mean why there was an ASSERT
>>
>> and now it is safe (for x86) to remove that?
> An assert is just a safe belt, the expectation is that it's never hit
> by actual code. Given that this path will now also be used by PVH
> (even if only on Arm) I don't see the point in keeping the assert, and
> making it conditional to != Arm seems worse than just dropping it.

Ok, so I can write in the patch description something like:

"this path is now used by PVH, so the assert is no longer valid"

Does it sound ok?

> Thanks, Roger.

Thank you,

Oleksandr


Re: [PATCH 20/24] dm-raid: use set_capacity_and_notify

2020-11-11 Thread Hannes Reinecke

On 11/11/20 9:26 AM, Christoph Hellwig wrote:
> Use set_capacity_and_notify to set the size of both the disk and block
> device.  This also gets the uevent notifications for the resize for free.
>
> Signed-off-by: Christoph Hellwig 
> ---
>  drivers/md/dm-raid.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)


Reviewed-by: Hannes Reinecke 

Cheers,

Hannes
--
Dr. Hannes Reinecke    Kernel Storage Architect
h...@suse.de  +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



Re: [PATCH 15/24] nvme: use set_capacity_and_notify in nvme_set_queue_dying

2020-11-11 Thread Hannes Reinecke

On 11/11/20 9:26 AM, Christoph Hellwig wrote:

Use the block layer helper to update both the disk and block device
sizes.  Contrary to the name no notification is sent in this case,
as a size 0 is special cased.

Signed-off-by: Christoph Hellwig 
---
  drivers/nvme/host/core.c | 13 +
  1 file changed, 1 insertion(+), 12 deletions(-)


Reviewed-by: Hannes Reinecke 

Cheers,

Hannes
--
Dr. Hannes Reinecke            Kernel Storage Architect
h...@suse.de  +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



Re: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code run with PVH domains on ARM

2020-11-11 Thread Roger Pau Monné
On Wed, Nov 11, 2020 at 01:10:01PM +, Oleksandr Andrushchenko wrote:
> 
> On 11/11/20 2:31 PM, Roger Pau Monné wrote:
> > On Mon, Nov 09, 2020 at 02:50:22PM +0200, Oleksandr Andrushchenko wrote:
> >> From: Oleksandr Andrushchenko 
> >>
> >> According to https://wiki.xenproject.org/wiki/Linux_PVH:
> >>
> >> Items not supported by PVH
> >>   - PCI pass through (as of Xen 4.10)
> >>
> >> Allow running PCI remove code on ARM and do not assert for PVH domains.
> >>
> >> Signed-off-by: Oleksandr Andrushchenko 
> >> ---
> >>   tools/libxl/Makefile| 4 
> >>   tools/libxl/libxl_pci.c | 4 +++-
> >>   2 files changed, 7 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> >> index 241da7fff6f4..f3806aafcb4e 100644
> >> --- a/tools/libxl/Makefile
> >> +++ b/tools/libxl/Makefile
> >> @@ -130,6 +130,10 @@ endif
> >>   
> >>   LIBXL_LIBS += -lyajl
> >>   
> >> +ifeq ($(CONFIG_ARM),y)
> >> +CFALGS += -DCONFIG_ARM
> >> +endif
> >> +
> >>   LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
> >>libxl_dom.o libxl_exec.o libxl_xshelp.o libxl_device.o \
> >>libxl_internal.o libxl_utils.o libxl_uuid.o \
> >> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> >> index bc5843b13701..b93cf976642b 100644
> >> --- a/tools/libxl/libxl_pci.c
> >> +++ b/tools/libxl/libxl_pci.c
> >> @@ -1915,8 +1915,10 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
> >>   goto out_fail;
> >>   }
> >>   } else {
> >> +/* PCI passthrough can also run on ARM PVH */
> >> +#ifndef CONFIG_ARM
> >>   assert(type == LIBXL_DOMAIN_TYPE_PV);
> >> -
> >> +#endif
> > I would just remove the assert now if this is to be used by Arm and
> > you don't need to fork the file for Arm.
> 
> Sounds good, I will drop then
> 
> But what would be the right explanation then? I mean why there was an ASSERT
> 
> and now it is safe (for x86) to remove that?

An assert is just a safe belt, the expectation is that it's never hit
by actual code. Given that this path will now also be used by PVH
(even if only on Arm) I don't see the point in keeping the assert, and
making it conditional to != Arm seems worse than just dropping it.

Thanks, Roger.



Re: [PATCH 13/24] dm: use set_capacity_and_notify

2020-11-11 Thread Hannes Reinecke

On 11/11/20 9:26 AM, Christoph Hellwig wrote:

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig 
---
  drivers/md/dm.c | 3 +--
  1 file changed, 1 insertion(+), 2 deletions(-)


Reviewed-by: Hannes Reinecke 

Cheers,

Hannes
--
Dr. Hannes Reinecke            Kernel Storage Architect
h...@suse.de  +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list

2020-11-11 Thread Roger Pau Monné
On Mon, Nov 09, 2020 at 02:50:23PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko 
> 
> The original code depends on pciback to manage assignable device list.
> The functionality which is implemented by the pciback and the toolstack
> and which is relevant/missing/needed for ARM:
> 
> 1. pciback is used as a database for assignable PCI devices, e.g. xl
>pci-assignable-{add|remove|list} manipulates that list. So, whenever the
>toolstack needs to know which PCI devices can be passed through it reads
>that from the relevant sysfs entries of the pciback.
> 
> 2. pciback is used to hold the unbound PCI devices, e.g. when passing through
>a PCI device it needs to be unbound from the relevant device driver and bound
>to pciback (strictly speaking it is not required that the device is bound to
>pciback, but pciback is again used as a database of the passed through PCI
>devices, so we can re-bind the devices back to their original drivers when
>guest domain shuts down)
> 
> 1. As ARM doesn't use pciback implement the above with additional sysctls:
>  - XEN_SYSCTL_pci_device_set_assigned

I don't see the point in having this sysfs, Xen already knows when a
device is assigned because the XEN_DOMCTL_assign_device hypercall is
used.

>  - XEN_SYSCTL_pci_device_get_assigned
>  - XEN_SYSCTL_pci_device_enum_assigned
> 2. Extend struct pci_dev to hold assignment state.

I'm not really fond of this, the hypervisor is no place to store a
database like this, unless it's strictly needed.

IMO the right implementation here would be to split Linux pciback into
two different drivers:

 - The pv-pci backend for doing passthrough to classic PV guests.
 - The rest of pciback: device reset, hand-holding driver for devices
   to be assigned and database.

I think there must be something similar in KVM that performs the tasks
of my last point, maybe we could piggyback on it?

If we want to go the route proposed by this patch, ie: Xen performing
the functions of pciback you would also have to move the PCI reset
code to Xen, so that you can fully manage the PCI devices from Xen.

> 
> Signed-off-by: Oleksandr Andrushchenko 
> ---
>  tools/libxc/include/xenctrl.h |   9 +++
>  tools/libxc/xc_domain.c   |   1 +
>  tools/libxc/xc_misc.c |  46 +++
>  tools/libxl/Makefile  |   4 ++
>  tools/libxl/libxl_pci.c   | 105 --
>  xen/arch/arm/sysctl.c |  66 -
>  xen/drivers/passthrough/pci.c |  93 ++
>  xen/include/public/sysctl.h   |  40 +
>  xen/include/xen/pci.h |  12 
>  9 files changed, 370 insertions(+), 6 deletions(-)

I've done some light review below given my questions above.

> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index 4c89b7294c4f..77029013da7d 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -2652,6 +2652,15 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
>  int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
>   xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
>  
> +typedef xen_sysctl_pci_device_enum_assigned_t xc_pci_device_enum_assigned_t;
> +
> +int xc_pci_device_set_assigned(xc_interface *xch, uint32_t machine_sbdf,
> +   bool assigned);
> +int xc_pci_device_get_assigned(xc_interface *xch, uint32_t machine_sbdf);
> +
> +int xc_pci_device_enum_assigned(xc_interface *xch,
> +xc_pci_device_enum_assigned_t *e);
> +
>  /* Compat shims */
>  #include "xenctrl_compat.h"
>  
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index 71829c2bce3e..d515191e9243 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -2321,6 +2321,7 @@ int xc_domain_soft_reset(xc_interface *xch,
>  domctl.domain = domid;
>  return do_domctl(xch, );
>  }
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
> index 3820394413a9..d439c4ba1019 100644
> --- a/tools/libxc/xc_misc.c
> +++ b/tools/libxc/xc_misc.c
> @@ -988,6 +988,52 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
>  return _xc_livepatch_action(xch, name, LIVEPATCH_ACTION_REPLACE, timeout, flags);
>  }
>  
> +int xc_pci_device_set_assigned(
> +xc_interface *xch,
> +uint32_t machine_sbdf,
> +bool assigned)
> +{
> +DECLARE_SYSCTL;
> +
> +sysctl.cmd = XEN_SYSCTL_pci_device_set_assigned;
> +sysctl.u.pci_set_assigned.machine_sbdf = machine_sbdf;
> +sysctl.u.pci_set_assigned.assigned = assigned;
> +
> +return do_sysctl(xch, );
> +}
> +
> +int xc_pci_device_get_assigned(
> +xc_interface *xch,
> +uint32_t machine_sbdf)
> +{
> +DECLARE_SYSCTL;
> +
> +sysctl.cmd = XEN_SYSCTL_pci_device_get_assigned;
> +

Re: [PATCH v3 5/7] x86: guard against straight-line speculation past RET

2020-11-11 Thread Jan Beulich
On 11.11.2020 12:15, Roger Pau Monné wrote:
> On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
>> Under certain conditions CPUs can speculate into the instruction stream
>> past a RET instruction. Guard against this just like 3b7dab93f240
>> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
>> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
>> achieve this that differ: A set of macros gets introduced to post-
>> process RET insns issued by the compiler (or living in assembly files).
>>
>> Unfortunately for clang this requires further features their built-in
>> assembler doesn't support: We need to be able to override insn mnemonics
>> produced by the compiler (which may be impossible, if internally
>> assembly mnemonics never get generated), and we want to use \(text)
>> escaping / quoting in the auxiliary macro.
>>
>> Signed-off-by: Jan Beulich 
>> Acked-by: Roger Pau Monné 
>> ---
>> TBD: Would be nice to avoid the additions in .init.text, but a query to
>>  the binutils folks regarding the ability to identify the section
>>  stuff is in (by Peter Zijlstra over a year ago:
>>  https://sourceware.org/pipermail/binutils/2019-July/107528.html)
>>  has been left without helpful replies.
>> ---
>> v3: Use .byte 0xc[23] instead of the nested macros.
>> v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.
>>
>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
>>  # https://bugs.llvm.org/show_bug.cgi?id=36110
>>  t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile $(open)".macro FOO;.endm",-no-integrated-as)
>>  
>> -CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
>> +# Check whether \(text) escaping in macro bodies is supported.
>> +t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 8",,-no-integrated-as)
>> +
>> +# Check whether macros can override insn mnemonics in inline assembly.
>> +t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; .error; .endm",-no-integrated-as)
> 
> I was going over this to post a bug report to LLVM, but it seems like
> gcc also doesn't overwrite ret when using the above snippet:
> 
> https://godbolt.org/z/oqsPTv

I can't see what's different from

void test(void) {
asm volatile (".macro ret; .error; .endm; .macro retq; .error; .endm");
}

but this one produces "Error: .error directive invoked in source file"
for me with both old and new gcc.

Jan



Re: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()

2020-11-11 Thread Jan Beulich
On 11.11.2020 09:41, Oleksandr wrote:
> 
> On 11.11.20 10:08, Jan Beulich wrote:
> 
> Hi Jan
> 
>> On 10.11.2020 21:53, Oleksandr wrote:
>>> On 20.10.20 13:51, Paul Durrant wrote:
>>>
>>> Hi Paul.
>>>
>>> Sorry for the late response.
>>>
> -Original Message-
> From: Oleksandr Tyshchenko 
> Sent: 15 October 2020 17:44
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko ; Stefano 
> Stabellini ;
> Julien Grall ; Volodymyr Babchuk 
> ; Andrew Cooper
> ; George Dunlap ; 
> Ian Jackson
> ; Jan Beulich ; Wei Liu 
> ; Paul Durrant
> ; Julien Grall 
> Subject: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
>
> From: Oleksandr Tyshchenko 
>
> This patch introduces a helper the main purpose of which is to check
> if a domain is using IOREQ server(s).
>
> On Arm the current benefit is to avoid calling handle_io_completion()
> (which implies iterating over all possible IOREQ servers anyway)
> on every return in leave_hypervisor_to_guest() if there is no active
> servers for the particular domain.
> Also this helper will be used by one of the subsequent patches on Arm.
>
> This involves adding an extra per-domain variable to store the count
> of servers in use.
>
> Signed-off-by: Oleksandr Tyshchenko 
> CC: Julien Grall 
>
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
>
> Changes RFC -> V1:
>  - new patch
>
> Changes V1 -> V2:
>  - update patch description
>  - guard helper with CONFIG_IOREQ_SERVER
>  - remove "hvm" prefix
>  - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
>  - put suitable ASSERT()s
>  - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
>  - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()
> ---
>xen/arch/arm/traps.c| 15 +--
>xen/common/ioreq.c  |  7 ++-
>xen/include/xen/ioreq.h | 14 ++
>xen/include/xen/sched.h |  1 +
>4 files changed, 30 insertions(+), 7 deletions(-)
>
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 507c095..a8f5fdf 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2261,14 +2261,17 @@ static bool check_for_vcpu_work(void)
>struct vcpu *v = current;
>
>#ifdef CONFIG_IOREQ_SERVER
> -bool handled;
> +if ( domain_has_ioreq_server(v->domain) )
> +{
> +bool handled;
>
> -local_irq_enable();
> -handled = handle_io_completion(v);
> -local_irq_disable();
> +local_irq_enable();
> +handled = handle_io_completion(v);
> +local_irq_disable();
>
> -if ( !handled )
> -return true;
> +if ( !handled )
> +return true;
> +}
>#endif
>
>if ( likely(!v->arch.need_flush_to_ram) )
> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
> index bcd4961..a72bc0e 100644
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
> struct ioreq_server *s)
>{
>ASSERT(id < MAX_NR_IOREQ_SERVERS);
> -ASSERT(!s || !d->ioreq_server.server[id]);
> +ASSERT(d->ioreq_server.server[id] ? !s : !!s);
 That looks odd. How about ASSERT(!s ^ !d->ioreq_server.server[id])?
>>> ok, looks like it will work.
>>>
>>>
 Paul

>d->ioreq_server.server[id] = s;
> +
> +if ( s )
> +d->ioreq_server.nr_servers++;
> +else
> +d->ioreq_server.nr_servers--;
>}
>
>#define GET_IOREQ_SERVER(d, id) \
> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
> index 7b03ab5..0679fef 100644
> --- a/xen/include/xen/ioreq.h
> +++ b/xen/include/xen/ioreq.h
> @@ -55,6 +55,20 @@ struct ioreq_server {
>uint8_tbufioreq_handling;
>};
>
> +#ifdef CONFIG_IOREQ_SERVER
> +static inline bool domain_has_ioreq_server(const struct domain *d)
> +{
> +ASSERT((current->domain == d) || atomic_read(>pause_count));
> +
 This seems like an odd place to put such an assertion.
>>> I might miss something or interpreted incorrectly but these asserts are
>>> the result of how I understood the review comment on previous version [1].
>>>
>>> I will copy a comment here for the convenience:
>>> "This is safe only when d == current->domain and it's not paused,
>>> or when they're distinct and d is paused. Otherwise the result is
>>> stale before the caller can inspect it. This wants 

[linux-linus test] 156631: regressions - FAIL

2020-11-11 Thread osstest service owner
flight 156631 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156631/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine   6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim 7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-raw7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-xl-shadow 7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-arm64-arm64-examine  8 reboot   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot fail REGR. vs. 152332
 test-arm64-arm64-xl   8 xen-boot fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop   fail REGR. vs. 152332
 test-armhf-armhf-libvirt  8 xen-boot fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-bootfail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd   8 xen-boot fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot   fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot fail REGR. vs. 152332
 test-armhf-armhf-examine  8 reboot   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm   8 xen-boot fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot fail REGR. vs. 152332
 test-armhf-armhf-xl   8 xen-boot fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds  8 xen-boot fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stopfail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stopfail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 

Re: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code run with PVH domains on ARM

2020-11-11 Thread Oleksandr Andrushchenko

On 11/11/20 2:31 PM, Roger Pau Monné wrote:
> On Mon, Nov 09, 2020 at 02:50:22PM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko 
>>
>> According to https://wiki.xenproject.org/wiki/Linux_PVH:
>>
>> Items not supported by PVH
>>   - PCI pass through (as of Xen 4.10)
>>
>> Allow running PCI remove code on ARM and do not assert for PVH domains.
>>
>> Signed-off-by: Oleksandr Andrushchenko 
>> ---
>>   tools/libxl/Makefile| 4 
>>   tools/libxl/libxl_pci.c | 4 +++-
>>   2 files changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
>> index 241da7fff6f4..f3806aafcb4e 100644
>> --- a/tools/libxl/Makefile
>> +++ b/tools/libxl/Makefile
>> @@ -130,6 +130,10 @@ endif
>>   
>>   LIBXL_LIBS += -lyajl
>>   
>> +ifeq ($(CONFIG_ARM),y)
>> +CFALGS += -DCONFIG_ARM
>> +endif
>> +
>>   LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
>>  libxl_dom.o libxl_exec.o libxl_xshelp.o libxl_device.o \
>>  libxl_internal.o libxl_utils.o libxl_uuid.o \
>> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
>> index bc5843b13701..b93cf976642b 100644
>> --- a/tools/libxl/libxl_pci.c
>> +++ b/tools/libxl/libxl_pci.c
>> @@ -1915,8 +1915,10 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
>>   goto out_fail;
>>   }
>>   } else {
>> +/* PCI passthrough can also run on ARM PVH */
>> +#ifndef CONFIG_ARM
>>   assert(type == LIBXL_DOMAIN_TYPE_PV);
>> -
>> +#endif
> I would just remove the assert now if this is to be used by Arm and
> you don't need to fork the file for Arm.

Sounds good, I will drop then

But what would be the right explanation then? I mean why there was an ASSERT

and now it is safe (for x86) to remove that?

>
> Roger.

Thank you,

Oleksandr


Re: [PATCH 06/24] block: add a return value to set_capacity_and_notify

2020-11-11 Thread Hannes Reinecke

On 11/11/20 9:26 AM, Christoph Hellwig wrote:

Return if the function ended up sending an uevent or not.

Signed-off-by: Christoph Hellwig 
---
  block/genhd.c | 7 +--
  include/linux/genhd.h | 2 +-
  2 files changed, 6 insertions(+), 3 deletions(-)


Reviewed-by: Hannes Reinecke 

Cheers,

Hannes
--
Dr. Hannes Reinecke            Kernel Storage Architect
h...@suse.de  +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



Re: [PATCH 04/24] sd: update the bdev size in sd_revalidate_disk

2020-11-11 Thread Hannes Reinecke

On 11/11/20 9:26 AM, Christoph Hellwig wrote:

This avoids the extra call to revalidate_disk_size in sd_rescan and
is otherwise a no-op because the size did not change, or we are in
the probe path.

Signed-off-by: Christoph Hellwig 
Acked-by: Martin K. Petersen 
---
  drivers/scsi/sd.c | 8 +++-
  1 file changed, 3 insertions(+), 5 deletions(-)


Reviewed-by: Hannes Reinecke 

Cheers.

Hannes
--
Dr. Hannes Reinecke            Kernel Storage Architect
h...@suse.de  +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



Re: [PATCH] x86/vpt: fix build with old gcc

2020-11-11 Thread Andrew Cooper
On 11/11/2020 12:58, Jan Beulich wrote:
> I believe it was the XSA-336 fix (42fcdd42328f "x86/vpt: fix race when
> migrating timers between vCPUs") which has unmasked a bogus
> uninitialized variable warning. This is observable with gcc 4.3.4, but
> only on 4.13 and older; it's hidden on newer versions apparently due to
> the addition to _read_unlock() done by 12509bbeb9e3 ("rwlocks: call
> preempt_disable() when taking a rwlock").
>
> Signed-off-by: Jan Beulich 

Acked-by: Andrew Cooper 



Re: [PATCH 05/24] block: remove the update_bdev parameter from set_capacity_revalidate_and_notify

2020-11-11 Thread Hannes Reinecke

On 11/11/20 9:26 AM, Christoph Hellwig wrote:

The update_bdev argument is always set to true, so remove it.  Also
rename the function to the slightly less verbose set_capacity_and_notify,
as propagating the disk size to the block device isn't really
revalidation.

Signed-off-by: Christoph Hellwig 
---
  block/genhd.c| 13 +
  drivers/block/loop.c | 11 +--
  drivers/block/virtio_blk.c   |  2 +-
  drivers/block/xen-blkfront.c |  2 +-
  drivers/nvme/host/core.c |  2 +-
  drivers/scsi/sd.c|  5 ++---
  include/linux/genhd.h|  3 +--
  7 files changed, 16 insertions(+), 22 deletions(-)


Reviewed-by: Hannes Reinecke 

Cheers,

Hannes
--
Dr. Hannes Reinecke            Kernel Storage Architect
h...@suse.de  +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



Re: [PATCH 03/24] nvme: let set_capacity_revalidate_and_notify update the bdev size

2020-11-11 Thread Hannes Reinecke

On 11/11/20 9:26 AM, Christoph Hellwig wrote:

There is no good reason to call revalidate_disk_size separately.

Signed-off-by: Christoph Hellwig 
---
  drivers/nvme/host/core.c | 5 +
  1 file changed, 1 insertion(+), 4 deletions(-)


Reviewed-by: Hannes Reinecke 

Cheers,

Hannes
--
Dr. Hannes Reinecke            Kernel Storage Architect
h...@suse.de  +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



[PATCH] x86/vpt: fix build with old gcc

2020-11-11 Thread Jan Beulich
I believe it was the XSA-336 fix (42fcdd42328f "x86/vpt: fix race when
migrating timers between vCPUs") which has unmasked a bogus
uninitialized variable warning. This is observable with gcc 4.3.4, but
only on 4.13 and older; it's hidden on newer versions apparently due to
the addition to _read_unlock() done by 12509bbeb9e3 ("rwlocks: call
preempt_disable() when taking a rwlock").

Signed-off-by: Jan Beulich 
---
Of course we could decide to only work around this on the older
branches. But I think it's better to have the fix everywhere (as long as
we still support such old gcc), as further changes may - effectively
randomly - unhide the warning again.

--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -401,7 +401,7 @@ int pt_update_irq(struct vcpu *v)
  * associated with the timer.
  */
 time_cb *cb = NULL;
-void *cb_priv;
+void *cb_priv = NULL;
 
 pt_vcpu_lock(v);
 /* Make sure the timer is still on the list. */



Re: [PATCH 02/24] loop: remove loop_set_size

2020-11-11 Thread Hannes Reinecke

On 11/11/20 9:26 AM, Christoph Hellwig wrote:

Just use set_capacity_revalidate_and_notify directly, as this function
can update the block device size as well when the last parameter is set
to true.

Signed-off-by: Christoph Hellwig 
---
  drivers/block/loop.c | 37 +++--
  1 file changed, 7 insertions(+), 30 deletions(-)


Reviewed-by: Hannes Reinecke 

Cheers,

Hannes
--
Dr. Hannes Reinecke            Kernel Storage Architect
h...@suse.de  +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



Re: [PATCH 01/24] block: remove the call to __invalidate_device in check_disk_size_change

2020-11-11 Thread Hannes Reinecke

On 11/11/20 9:26 AM, Christoph Hellwig wrote:

__invalidate_device without the kill_dirty parameter just invalidates
various clean entries in caches, which doesn't really help us with
anything, but can cause all kinds of horrible lock orders due to how
it calls into the file system.  The only reason this hasn't been a
major issue is because so many people use partitions, for which no
invalidation was performed anyway.

Signed-off-by: Christoph Hellwig 
---
  fs/block_dev.c | 6 --
  1 file changed, 6 deletions(-)


Reviewed-by: Hannes Reinecke 

Cheers,

Hannes
--
Dr. Hannes Reinecke            Kernel Storage Architect
h...@suse.de  +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



[PATCH] xen/x86: Work around Clang code generation bug with asm parameters

2020-11-11 Thread Andrew Cooper
Clang 9 and later don't handle the clobber of %r10 correctly in
_hypercall64_4().  See https://bugs.llvm.org/show_bug.cgi?id=48122

Rewrite the logic to use the "+r" form, rather than using separate input and
output specifiers, which works around the issue.  For consistency, adjust all
operand handling.

Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Roger Pau Monné 
CC: Wei Liu 
---
 xen/include/asm-x86/guest/xen-hcall.h | 35 ---
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/xen/include/asm-x86/guest/xen-hcall.h b/xen/include/asm-x86/guest/xen-hcall.h
index 03d5868a9e..3240d9807b 100644
--- a/xen/include/asm-x86/guest/xen-hcall.h
+++ b/xen/include/asm-x86/guest/xen-hcall.h
@@ -39,53 +39,50 @@
 
 #define _hypercall64_1(type, hcall, a1) \
 ({  \
-long res, tmp__;\
+long res, _a1 = (long)(a1); \
 asm volatile (  \
 "call hypercall_page + %c[offset]"  \
-: "=a" (res), "=D" (tmp__) ASM_CALL_CONSTRAINT  \
-: [offset] "i" (hcall * 32),\
-  "1" ((long)(a1))  \
+: "=a" (res), "+D" (_a1) ASM_CALL_CONSTRAINT\
+: [offset] "i" (hcall * 32) \
 : "memory" );   \
 (type)res;  \
 })
 
 #define _hypercall64_2(type, hcall, a1, a2) \
 ({  \
-long res, tmp__;\
+long res, _a1 = (long)(a1), _a2 = (long)(a2);   \
 asm volatile (  \
 "call hypercall_page + %c[offset]"  \
-: "=a" (res), "=D" (tmp__), "=S" (tmp__)\
+: "=a" (res), "+D" (_a1), "+S" (_a2)\
   ASM_CALL_CONSTRAINT   \
-: [offset] "i" (hcall * 32),\
-  "1" ((long)(a1)), "2" ((long)(a2))\
+: [offset] "i" (hcall * 32) \
 : "memory" );   \
 (type)res;  \
 })
 
 #define _hypercall64_3(type, hcall, a1, a2, a3) \
 ({  \
-long res, tmp__;\
+long res, _a1 = (long)(a1), _a2 = (long)(a2),   \
+_a3 = (long)(a3);   \
 asm volatile (  \
 "call hypercall_page + %c[offset]"  \
-: "=a" (res), "=D" (tmp__), "=S" (tmp__), "=d" (tmp__)  \
+: "=a" (res), "+D" (_a1), "+S" (_a2), "+d" (_a3)\
   ASM_CALL_CONSTRAINT   \
-: [offset] "i" (hcall * 32),\
-  "1" ((long)(a1)), "2" ((long)(a2)), "3" ((long)(a3))  \
+: [offset] "i" (hcall * 32) \
 : "memory" );   \
 (type)res;  \
 })
 
 #define _hypercall64_4(type, hcall, a1, a2, a3, a4) \
 ({  \
-long res, tmp__;\
-register long _a4 asm ("r10") = ((long)(a4));   \
+long res, _a1 = (long)(a1), _a2 = (long)(a2),   \
+_a3 = (long)(a3);   \
+register long _a4 asm ("r10") = (long)(a4); \
 asm volatile (  \
 "call hypercall_page + %c[offset]"  \
-: "=a" (res), "=D" (tmp__), "=S" (tmp__), "=d" (tmp__), \
-  "=" (tmp__) ASM_CALL_CONSTRAINT \
-: [offset] "i" (hcall * 32),\
-  "1" ((long)(a1)), "2" ((long)(a2)), "3" ((long)(a3)), \
-  "4" (_a4) \
+ 

Re: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code run with PVH domains on ARM

2020-11-11 Thread Roger Pau Monné
On Mon, Nov 09, 2020 at 02:50:22PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko 
> 
> According to https://wiki.xenproject.org/wiki/Linux_PVH:
> 
> Items not supported by PVH
>  - PCI pass through (as of Xen 4.10)
> 
> Allow running PCI remove code on ARM and do not assert for PVH domains.
> 
> Signed-off-by: Oleksandr Andrushchenko 
> ---
>  tools/libxl/Makefile| 4 
>  tools/libxl/libxl_pci.c | 4 +++-
>  2 files changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 241da7fff6f4..f3806aafcb4e 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -130,6 +130,10 @@ endif
>  
>  LIBXL_LIBS += -lyajl
>  
> +ifeq ($(CONFIG_ARM),y)
> +CFALGS += -DCONFIG_ARM
> +endif
> +
>  LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
>   libxl_dom.o libxl_exec.o libxl_xshelp.o libxl_device.o \
>   libxl_internal.o libxl_utils.o libxl_uuid.o \
> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> index bc5843b13701..b93cf976642b 100644
> --- a/tools/libxl/libxl_pci.c
> +++ b/tools/libxl/libxl_pci.c
> @@ -1915,8 +1915,10 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
>  goto out_fail;
>  }
>  } else {
> +/* PCI passthrough can also run on ARM PVH */
> +#ifndef CONFIG_ARM
>  assert(type == LIBXL_DOMAIN_TYPE_PV);
> -
> +#endif

I would just remove the assert now if this is to be used by Arm and
you don't need to fork the file for Arm.

Roger.



Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook

2020-11-11 Thread Roger Pau Monné
On Tue, Nov 10, 2020 at 03:50:44PM +0100, Jan Beulich wrote:
> On 10.11.2020 14:59, Roger Pau Monné wrote:
> > On Wed, Oct 28, 2020 at 10:24:53AM +0100, Jan Beulich wrote:
> >> --- a/xen/arch/x86/mm/hap/nested_hap.c
> >> +++ b/xen/arch/x86/mm/hap/nested_hap.c
> >> @@ -71,24 +71,11 @@
> >>  /*NESTED VIRT P2M FUNCTIONS */
> >>  //
> >>  
> >> -int
> >> -nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
> >> -l1_pgentry_t *p, l1_pgentry_t new, unsigned int level)
> >> +void
> >> +nestedp2m_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int 
> >> oflags)
> >>  {
> >> -struct domain *d = p2m->domain;
> >> -uint32_t old_flags;
> >> -
> >> -paging_lock(d);
> >> -
> >> -old_flags = l1e_get_flags(*p);
> >> -safe_write_pte(p, new);
> >> -
> >> -if (old_flags & _PAGE_PRESENT)
> >> -guest_flush_tlb_mask(d, p2m->dirty_cpumask);
> >> -
> >> -paging_unlock(d);
> >> -
> >> -return 0;
> >> +if ( oflags & _PAGE_PRESENT )
> >> +guest_flush_tlb_mask(p2m->domain, p2m->dirty_cpumask);
> >>  }
> > 
> > This is a verbatim copy of hap_write_p2m_entry_post. I assume there's
> > a reason why we need both, but I'm missing it.
> 
> Only almost, since HAP has
> 
> if ( oflags & _PAGE_PRESENT )
> guest_flush_tlb_mask(d, d->dirty_cpumask);
> 
> instead (i.e. they differ in which dirty_cpumask gets used).

Sorry, missed that bit.

> >> --- a/xen/arch/x86/mm/p2m-pt.c
> >> +++ b/xen/arch/x86/mm/p2m-pt.c
> >> @@ -122,17 +122,55 @@ static int write_p2m_entry(struct p2m_do
> >>  {
> >>  struct domain *d = p2m->domain;
> >>  struct vcpu *v = current;
> >> -int rc = 0;
> >>  
> >>  if ( v->domain != d )
> >>  v = d->vcpu ? d->vcpu[0] : NULL;
> >>  if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
> >>   p2m_is_nestedp2m(p2m) )
> >> -rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
> >> +{
> >> +unsigned int oflags;
> >> +mfn_t omfn;
> >> +int rc;
> >> +
> >> +paging_lock(d);
> >> +
> >> +if ( p2m->write_p2m_entry_pre )
> >> +p2m->write_p2m_entry_pre(d, gfn, p, new, level);
> >> +
> >> +oflags = l1e_get_flags(*p);
> >> +omfn = l1e_get_mfn(*p);
> >> +
> >> +rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
> >> +  p2m_flags_to_type(oflags), l1e_get_mfn(new),
> >> +  omfn, level);
> >> +if ( rc )
> >> +{
> >> +paging_unlock(d);
> >> +return rc;
> >> +}
> >> +
> >> +safe_write_pte(p, new);
> >> +
> >> +if ( p2m->write_p2m_entry_post )
> >> +p2m->write_p2m_entry_post(p2m, oflags);
> >> +
> >> +paging_unlock(d);
> >> +
> >> +if ( nestedhvm_enabled(d) && !p2m_is_nestedp2m(p2m) &&
> >> + (oflags & _PAGE_PRESENT) &&
> >> + !p2m_get_hostp2m(d)->defer_nested_flush &&
> >> + /*
> >> +  * We are replacing a valid entry so we need to flush nested 
> >> p2ms,
> >> +  * unless the only change is an increase in access rights.
> >> +  */
> >> + (!mfn_eq(omfn, l1e_get_mfn(new)) ||
> >> +  !perms_strictly_increased(oflags, l1e_get_flags(new))) )
> >> +p2m_flush_nestedp2m(d);
> > 
> > It feels slightly weird to have a nested p2m hook post, and yet have
> > nested specific code here.
> > 
> > Have you considered if the post hook could be moved outside of the
> > locked region, so that we could put this chunk there in the nested p2m
> > case?
> 
> Yes, I did, but I don't think the post hook can be moved out. The
> only alternative therefore would be a 3rd hook. And this hook would
> then need to be installed on the host p2m for nested guests, as
> opposed to nestedp2m_write_p2m_entry_post, which gets installed in
> the nested p2m-s. As said in the description, the main reason I
> decided against a 3rd hook is that I suppose the code here isn't
> HAP-specific (while prior to this patch it was).

I'm not convinced the guest TLB flush needs to be performed while
holding the paging lock. The point of such flush is to invalidate any
intermediate guest visible translations that might now be invalid as a
result of the p2m change, but the paging lock doesn't affect the guest
in any way.

It's true that the dirty_cpumask might change, but I think we only
care that when returning from the function there are no stale cache
entries that contain the now invalid translation, and this can be
achieved equally by doing the flush outside of the locked region.

Roger.



[ovmf test] 156632: all pass - PUSHED

2020-11-11 Thread osstest service owner
flight 156632 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156632/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 8c610e6075f2a200400970698a810a57ad49220e
baseline version:
 ovmf 0af7f8e6a9253960ba820cd6ddfd8c36543d30cb

Last test of basis   156606  2020-11-10 00:39:48 Z1 days
Testing same since   156632  2020-11-10 17:53:05 Z0 days1 attempts


People who touched revisions under test:
  Liming Gao 
  Michael Kubacki 
  Mingyue Liang 
  Yunhua Feng 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0af7f8e6a9..8c610e6075  8c610e6075f2a200400970698a810a57ad49220e -> 
xen-tested-master



Re: [PATCH] docs: fix documentation about default scheduler

2020-11-11 Thread Roger Pau Monné
On Wed, Nov 11, 2020 at 12:45:38PM +0100, Jürgen Groß wrote:
> On 11.11.20 12:37, Roger Pau Monné wrote:
> > On Wed, Nov 11, 2020 at 12:24:50PM +0100, George Dunlap wrote:
> > > 
> > > 
> > > > On Nov 11, 2020, at 11:10 AM, Andrew Cooper  
> > > > wrote:
> > > > 
> > > > On 11/11/2020 10:12, George Dunlap wrote:
> > > > > 
> > > > > > On Nov 10, 2020, at 6:51 PM, Roger Pau Monne  
> > > > > > wrote:
> > > > > > 
> > > > > > Fix the command line document to account for the default scheduler 
> > > > > > not
> > > > > > being credit anymore likely, and the fact that it's selectable from
> > > > > > Kconfig and thus different builds could end up with different 
> > > > > > default
> > > > > > schedulers.
> > > > > > 
> > > > > > Fixes: dafd936dddbd ('Make credit2 the default scheduler')
> > > > > > Signed-off-by: Roger Pau Monné 
> > > > > > ---
> > > > > > Changes since v1:
> > > > > > - Point that the default scheduler is being selected by Kconfig,
> > > > > >   don't mention the default Kconfig selection.
> > > > > > ---
> > > > > > docs/misc/xen-command-line.pandoc | 2 +-
> > > > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > > 
> > > > > > diff --git a/docs/misc/xen-command-line.pandoc 
> > > > > > b/docs/misc/xen-command-line.pandoc
> > > > > > index 4ae9391fcd..eb1db25f92 100644
> > > > > > --- a/docs/misc/xen-command-line.pandoc
> > > > > > +++ b/docs/misc/xen-command-line.pandoc
> > > > > > @@ -1876,7 +1876,7 @@ with read and write permissions.
> > > > > > ### sched
> > > > > > > `= credit | credit2 | arinc653 | rtds | null`
> > > > > > -> Default: `sched=credit`
> > > > > > +> Default: selectable via Kconfig.  Depends on enabled schedulers.
> > > > > Sorry for not weighing in earlier; but this basically makes this 
> > > > > documentation useless.
> > > > 
> > > > No.  It is the only half useable version, by being the only version
> > > > which isn't misleading.
> > > 
> > > The version in this patch essentially says “This has some options, it 
> > > also has a default, but we’re not going to tell you any of them, nor what 
> > > your default most likely is.  Go read KConfig to find out.”  This is 
> > > completely useless to the person trying to figure out what the default is 
> > > and what potential alternate values they can put here.
> > > 
> > > The vast majority of people who search for this on the internet will have 
> > > that list of options, and have credit2=default.  As long as we tell them 
> > > that a local configuration can override the available options and the 
> > > default, people are smart enough to figure out what their system is doing.
> > > 
> > > > It would however be far better to name the CONFIG_ variable (we do
> > > > elsewhere in this doc) in question so people can actually figure out
> > > > what they've got in front of them.
> > > 
> > > Something like that would be even better, if Roger (or someone) wants to 
> > > write it up.
> > 
> > I'm happy to send an updated version, but would like to have some
> > agreement before doing so. Is the text below acceptable to everyone?
> > 
> > ### sched
> > > `= credit | credit2 | arinc653 | rtds | null`
> > 
> > > Default: `sched=credit2`
> > 
> > Choose the default scheduler. Note the default scheduler is selectable via
> > Kconfig and depends on enabled schedulers. Check
> 
> ... CONFIG_SCHED_DEFAULT to see which scheduler is the default.
> 
> CONFIG_SCHED_{scheduler_name} specify which schedulers are available.

Hm, that's weird. When I hit help in menuconfig for the default
scheduler selection it reports the option is named
SCHED_{name}_DEFAULT. Will change it.

Roger.



Re: [PATCH] docs: fix documentation about default scheduler

2020-11-11 Thread Jürgen Groß

On 11.11.20 12:37, Roger Pau Monné wrote:

On Wed, Nov 11, 2020 at 12:24:50PM +0100, George Dunlap wrote:




On Nov 11, 2020, at 11:10 AM, Andrew Cooper  wrote:

On 11/11/2020 10:12, George Dunlap wrote:



On Nov 10, 2020, at 6:51 PM, Roger Pau Monne  wrote:

Fix the command line document to account for the default scheduler not
being credit anymore likely, and the fact that it's selectable from
Kconfig and thus different builds could end up with different default
schedulers.

Fixes: dafd936dddbd ('Make credit2 the default scheduler')
Signed-off-by: Roger Pau Monné 
---
Changes since v1:
- Point that the default scheduler is being selected by Kconfig,
  don't mention the default Kconfig selection.
---
docs/misc/xen-command-line.pandoc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/misc/xen-command-line.pandoc 
b/docs/misc/xen-command-line.pandoc
index 4ae9391fcd..eb1db25f92 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1876,7 +1876,7 @@ with read and write permissions.
### sched

`= credit | credit2 | arinc653 | rtds | null`

-> Default: `sched=credit`
+> Default: selectable via Kconfig.  Depends on enabled schedulers.

Sorry for not weighing in earlier; but this basically makes this documentation 
useless.


No.  It is the only half useable version, by being the only version
which isn't misleading.


The version in this patch essentially says “This has some options, it also has 
a default, but we’re not going to tell you any of them, nor what your default 
most likely is.  Go read KConfig to find out.”  This is completely useless 
to the person trying to figure out what the default is and what potential 
alternate values they can put here.

The vast majority of people who search for this on the internet will have that 
list of options, and have credit2=default.  As long as we tell them that a 
local configuration can override the available options and the default, people 
are smart enough to figure out what their system is doing.


It would however be far better to name the CONFIG_ variable (we do
elsewhere in this doc) in question so people can actually figure out
what they've got in front of them.


Something like that would be even better, if Roger (or someone) wants to write 
it up.


I'm happy to send an updated version, but would like to have some
agreement before doing so. Is the text below acceptable to everyone?

### sched

`= credit | credit2 | arinc653 | rtds | null`



Default: `sched=credit2`


Choose the default scheduler. Note the default scheduler is selectable via
Kconfig and depends on enabled schedulers. Check


... CONFIG_SCHED_DEFAULT to see which scheduler is the default.

CONFIG_SCHED_{scheduler_name} specify which schedulers are available.


Juergen




Re: [PATCH] docs: fix documentation about default scheduler

2020-11-11 Thread Roger Pau Monné
On Wed, Nov 11, 2020 at 12:24:50PM +0100, George Dunlap wrote:
> 
> 
> > On Nov 11, 2020, at 11:10 AM, Andrew Cooper  
> > wrote:
> > 
> > On 11/11/2020 10:12, George Dunlap wrote:
> >> 
> >>> On Nov 10, 2020, at 6:51 PM, Roger Pau Monne  wrote:
> >>> 
> >>> Fix the command line document to account for the default scheduler not
> >>> being credit anymore likely, and the fact that it's selectable from
> >>> Kconfig and thus different builds could end up with different default
> >>> schedulers.
> >>> 
> >>> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
> >>> Signed-off-by: Roger Pau Monné 
> >>> ---
> >>> Changes since v1:
> >>> - Point that the default scheduler is being selected by Kconfig,
> >>>  don't mention the default Kconfig selection.
> >>> ---
> >>> docs/misc/xen-command-line.pandoc | 2 +-
> >>> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>> 
> >>> diff --git a/docs/misc/xen-command-line.pandoc 
> >>> b/docs/misc/xen-command-line.pandoc
> >>> index 4ae9391fcd..eb1db25f92 100644
> >>> --- a/docs/misc/xen-command-line.pandoc
> >>> +++ b/docs/misc/xen-command-line.pandoc
> >>> @@ -1876,7 +1876,7 @@ with read and write permissions.
> >>> ### sched
>  `= credit | credit2 | arinc653 | rtds | null`
> >>> -> Default: `sched=credit`
> >>> +> Default: selectable via Kconfig.  Depends on enabled schedulers.
> >> Sorry for not weighing in earlier; but this basically makes this 
> >> documentation useless.
> > 
> > No.  It is the only half useable version, by being the only version
> > which isn't misleading.
> 
> The version in this patch essentially says “This has some options, it also 
> has a default, but we’re not going to tell you any of them, nor what your 
> default most likely is.  Go read KConfig to find out.”  This is completely 
> useless to the person trying to figure out what the default is and what 
> potential alternate values they can put here.
> 
> The vast majority of people who search for this on the internet will have 
> that list of options, and have credit2=default.  As long as we tell them that 
> a local configuration can override the available options and the default, 
> people are smart enough to figure out what their system is doing.
> 
> > It would however be far better to name the CONFIG_ variable (we do
> > elsewhere in this doc) in question so people can actually figure out
> > what they've got in front of them.
> 
> Something like that would be even better, if Roger (or someone) wants to 
> write it up.

I'm happy to send an updated version, but would like to have some
agreement before doing so. Is the text below acceptable to everyone?

### sched
> `= credit | credit2 | arinc653 | rtds | null`

> Default: `sched=credit2`

Choose the default scheduler. Note the default scheduler is selectable via
Kconfig and depends on enabled schedulers. Check
`CONFIG_SCHED_{scheduler_name}_DEFAULT` when building Xen to adjust the
default scheduler.

Roger.



Re: [PATCH v3 5/7] x86: guard against straight-line speculation past RET

2020-11-11 Thread Roger Pau Monné
On Tue, Nov 10, 2020 at 03:32:43PM +0100, Jan Beulich wrote:
> On 10.11.2020 15:08, Roger Pau Monné wrote:
> > On Tue, Nov 10, 2020 at 02:19:40PM +0100, Jan Beulich wrote:
> >> On 10.11.2020 12:16, Roger Pau Monné wrote:
> >>> On Tue, Nov 10, 2020 at 11:06:46AM +0100, Jan Beulich wrote:
>  On 10.11.2020 10:31, Roger Pau Monné wrote:
> > On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
> >> Under certain conditions CPUs can speculate into the instruction stream
> >> past a RET instruction. Guard against this just like 3b7dab93f240
> >> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> >> did - by inserting an "INT $3" insn. It's merely the mechanics of how 
> >> to
> >> achieve this that differ: A set of macros gets introduced to post-
> >> process RET insns issued by the compiler (or living in assembly files).
> >>
> >> Unfortunately for clang this requires further features their built-in
> >> assembler doesn't support: We need to be able to override insn 
> >> mnemonics
> >> produced by the compiler (which may be impossible, if internally
> >> assembly mnemonics never get generated), and we want to use \(text)
> >> escaping / quoting in the auxiliary macro.
> >
> > Could this have an option to enable/disable at build time?
> 
>  Well, a subsequent patch adds a config option for this, which in
>  the worst case could be turned off. I'm afraid though I'm not
>  clear about the question, because ...
> 
> > FreeBSD will drop GNU as quite soon from base, and albeit it can be
> > installed as a package I would like to be able to build Xen using a
> > toolchain based on LLVM exclusively.
> 
>  ... it's not clear to me what the implications here are: Are you
>  saying -no-integrated-as is not going to function anymore, unless
>  people explicitly install gas? If that's not what you meant to
>  indicate, then I don't see how building would become impossible.
> >>>
> >>> I'm still inquiring about this, but I would say that when gas is
> >>> removed from FreeBSD then the 'as' command would be mapped to llvm-as,
> >>> and thus -no-integrated-as would hit the same issues as the integrated
> >>> as. So far in Xen we have assumed that -no-integrated-as would
> >>> fallback to an as capable of doing what the integrated clang as
> >>> doesn't support, but that might not be the case.
> >>
> >> At which point Xen couldn't be built anyway, I expect. If llvm-as
> >> isn't sufficiently gas-compatible, we've lost (right now at least).
> >>
> >>> Ideally we would have to re-run the tests with -no-integrated-as, in
> >>> order to assert that the external as is really capable of what the
> >>> internal one is not.
> >>
> >> And if it doesn't, what would we do other than failing the build
> >> (which it would also if we didn't do the 2nd round of checks)?
> > 
> > I would always prefer a clear message (ie: your toolstack is not
> > capable of building Xen) rather than a weird build time failure.
> 
> Fair point in general.
> 
> > Also we could maybe disable certain options by default if the
> > toolstack doesn't have the required support to build them?
> 
> We could, but I'm afraid this will go down the route of embedding
> tool chain capabilities in xen/.config, which I continue to not
> consider a good idea (and the thread got stalled, as expected).
> 
> In fact (also to Andrew and Anthony), recently I've become aware
> of another shortcoming of this model: Our kernel packages contain
> .config files for the various architectures and specific per-
> architecture flavors. It used to be easy to update them on any
> system, until the tool chain capability checks got introduced.
> Now, in order to update them, one has to use the precise versions
> of the various tool chain parts that will be used on the build
> hosts, or else an error may result (for unexpected changes to
> the file), or one may unknowingly turn off options that are
> expected to be enabled.

I think the options should only be set based on toolchain capabilities
when there's no .config. If there's an existing .config we should just
check whether the toolchain is capable of building the selected set of
options, or else report an error.

I guess this would apply to defconfig selecting options based on
toolchain capabilities.

> Put more generally - if I handed someone a specific .config, I'd
> expect their resulting binary to contain what I did set up. Or
> for them to report back that they can't build the thing. But it
> should not be the case that the .config got silently changed and
> certain functionality disabled just because they use a different
> tool chain.

Yes, I agree with this.

> > Has anyone reported this shortcoming to upstream llvm, so they are
> > aware and can work on this or maybe provide an alternative way to
> > achieve the same result?
> 
> I didn't and I'm unaware of anyone else possibly 

Re: [PATCH] docs: fix documentation about default scheduler

2020-11-11 Thread George Dunlap


> On Nov 11, 2020, at 11:10 AM, Andrew Cooper  wrote:
> 
> On 11/11/2020 10:12, George Dunlap wrote:
>> 
>>> On Nov 10, 2020, at 6:51 PM, Roger Pau Monne  wrote:
>>> 
>>> Fix the command line document to account for the default scheduler not
>>> being credit anymore likely, and the fact that it's selectable from
>>> Kconfig and thus different builds could end up with different default
>>> schedulers.
>>> 
>>> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
>>> Signed-off-by: Roger Pau Monné 
>>> ---
>>> Changes since v1:
>>> - Point that the default scheduler is being selected by Kconfig,
>>>  don't mention the default Kconfig selection.
>>> ---
>>> docs/misc/xen-command-line.pandoc | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>> 
>>> diff --git a/docs/misc/xen-command-line.pandoc 
>>> b/docs/misc/xen-command-line.pandoc
>>> index 4ae9391fcd..eb1db25f92 100644
>>> --- a/docs/misc/xen-command-line.pandoc
>>> +++ b/docs/misc/xen-command-line.pandoc
>>> @@ -1876,7 +1876,7 @@ with read and write permissions.
>>> ### sched
 `= credit | credit2 | arinc653 | rtds | null`
>>> -> Default: `sched=credit`
>>> +> Default: selectable via Kconfig.  Depends on enabled schedulers.
>> Sorry for not weighing in earlier; but this basically makes this 
>> documentation useless.
> 
> No.  It is the only half useable version, by being the only version
> which isn't misleading.

The version in this patch essentially says “This has some options, it also has 
a default, but we’re not going to tell you any of them, nor what your default 
most likely is.  Go read KConfig to find out.”  This is completely useless 
to the person trying to figure out what the default is and what potential 
alternate values they can put here.

The vast majority of people who search for this on the internet will have that 
list of options, and have credit2=default.  As long as we tell them that a 
local configuration can override the available options and the default, people 
are smart enough to figure out what their system is doing.

> It would however be far better to name the CONFIG_ variable (we do
> elsewhere in this doc) in question so people can actually figure out
> what they've got in front of them.

Something like that would be even better, if Roger (or someone) wants to write 
it up.

 -George

Re: [PATCH v3 5/7] x86: guard against straight-line speculation past RET

2020-11-11 Thread Roger Pau Monné
On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
> Under certain conditions CPUs can speculate into the instruction stream
> past a RET instruction. Guard against this just like 3b7dab93f240
> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
> achieve this that differ: A set of macros gets introduced to post-
> process RET insns issued by the compiler (or living in assembly files).
> 
> Unfortunately for clang this requires further features their built-in
> assembler doesn't support: We need to be able to override insn mnemonics
> produced by the compiler (which may be impossible, if internally
> assembly mnemonics never get generated), and we want to use \(text)
> escaping / quoting in the auxiliary macro.
> 
> Signed-off-by: Jan Beulich 
> Acked-by: Roger Pau Monné 
> ---
> TBD: Would be nice to avoid the additions in .init.text, but a query to
>  the binutils folks regarding the ability to identify the section
>  stuff is in (by Peter Zijlstra over a year ago:
>  https://sourceware.org/pipermail/binutils/2019-July/107528.html)
>  has been left without helpful replies.
> ---
> v3: Use .byte 0xc[23] instead of the nested macros.
> v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.
> 
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
>  # https://bugs.llvm.org/show_bug.cgi?id=36110
>  t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile 
> $(open)".macro FOO;.endm",-no-integrated-as)
>  
> -CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
> +# Check whether \(text) escaping in macro bodies is supported.
> +t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 
> 8",,-no-integrated-as)
> +
> +# Check whether macros can override insn mnemonics in inline assembly.
> +t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; .error; 
> .endm",-no-integrated-as)

I was going over this to post a bug report to LLVM, but it seems like
gcc also doesn't overwrite ret when using the above snippet:

https://godbolt.org/z/oqsPTv

Roger.



Re: [PATCH] docs: fix documentation about default scheduler

2020-11-11 Thread Andrew Cooper
On 11/11/2020 10:12, George Dunlap wrote:
>
>> On Nov 10, 2020, at 6:51 PM, Roger Pau Monne  wrote:
>>
>> Fix the command line document to account for the default scheduler not
>> being credit anymore likely, and the fact that it's selectable from
>> Kconfig and thus different builds could end up with different default
>> schedulers.
>>
>> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
>> Signed-off-by: Roger Pau Monné 
>> ---
>> Changes since v1:
>> - Point that the default scheduler is being selected by Kconfig,
>>   don't mention the default Kconfig selection.
>> ---
>> docs/misc/xen-command-line.pandoc | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/docs/misc/xen-command-line.pandoc 
>> b/docs/misc/xen-command-line.pandoc
>> index 4ae9391fcd..eb1db25f92 100644
>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -1876,7 +1876,7 @@ with read and write permissions.
>> ### sched
>>> `= credit | credit2 | arinc653 | rtds | null`
>> -> Default: `sched=credit`
>> +> Default: selectable via Kconfig.  Depends on enabled schedulers.
> Sorry for not weighing in earlier; but this basically makes this 
> documentation useless.

No.  It is the only half useable version, by being the only version
which isn't misleading.

It would however be far better to name the CONFIG_ variable (we do
elsewhere in this doc) in question so people can actually figure out
what they've got in front of them.

~Andrew



[xen-unstable-smoke test] 156679: tolerable all pass - PUSHED

2020-11-11 Thread osstest service owner
flight 156679 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156679/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  69634224afaf84474f04e1ab050f216d66bcda68
baseline version:
 xen  3059178798a23ba870ff86ff54d442a07e6651fc

Last test of basis   156622  2020-11-10 13:01:19 Z0 days
Failing since156628  2020-11-10 17:00:28 Z0 days6 attempts
Testing same since   156679  2020-11-11 08:00:30 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Jan Beulich 
  Juergen Gross 
  Julien Grall 
  Roger Pau Monné 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass





Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3059178798..69634224af  69634224afaf84474f04e1ab050f216d66bcda68 -> smoke



Re: [PATCH] docs: fix documentation about default scheduler

2020-11-11 Thread George Dunlap


> On Nov 10, 2020, at 6:51 PM, Roger Pau Monne  wrote:
> 
> Fix the command line document to account for the default scheduler not
> being credit anymore likely, and the fact that it's selectable from
> Kconfig and thus different builds could end up with different default
> schedulers.
> 
> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
> Signed-off-by: Roger Pau Monné 
> ---
> Changes since v1:
> - Point that the default scheduler is being selected by Kconfig,
>   don't mention the default Kconfig selection.
> ---
> docs/misc/xen-command-line.pandoc | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc 
> b/docs/misc/xen-command-line.pandoc
> index 4ae9391fcd..eb1db25f92 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1876,7 +1876,7 @@ with read and write permissions.
> ### sched
>> `= credit | credit2 | arinc653 | rtds | null`
> 
> -> Default: `sched=credit`
> +> Default: selectable via Kconfig.  Depends on enabled schedulers.

Sorry for not weighing in earlier; but this basically makes this documentation 
useless.

I’d rather say:

—>8

= credit | credit2 …

Default: sched=credit2

NB that default scheduler and schedulers available can be modified via Kconfig.

8<

 -George
