Re: Problems with APIC on versions 4.9 and later (4.8 works)

2021-01-31 Thread Jan Beulich
On 29.01.2021 20:31, Claudemir Todo Bom wrote:
> On Fri, Jan 29, 2021 at 13:24, Jan Beulich wrote:
>>
>> On 28.01.2021 14:08, Claudemir Todo Bom wrote:
>>> On Thu, Jan 28, 2021 at 06:49, Jan Beulich wrote:

 On 28.01.2021 10:47, Jan Beulich wrote:
> On 26.01.2021 14:03, Claudemir Todo Bom wrote:
>> If this information is good for more tests, please send the patch and
>> I will test it!
>
> Here you go. For simplifying analysis it may be helpful if you
> could limit the number of CPUs in use, e.g. by "maxcpus=4" or
> at least "smt=0". Provided the problem still reproduces with
> such options, of course.

 Speaking of command line options - it doesn't look like you have
 told us what else you have on the Xen command line, and without
 a serial log this isn't visible (e.g. in your video).
>>>
>>> All tests are done with xen command line:
>>>
>>> dom0_mem=1024M,max:2048M dom0_max_vcpus=4 dom0_vcpus_pin=true
>>> smt=false vga=text-80x50,keep
>>>
>>> and kernel command line:
>>>
>>> loglevel=0 earlyprintk=xen nomodeset
>>>
>>> this way I can get all xen messages on console.
>>>
>>> Attached are the frames I captured from a video, I manually selected
>>> them starting from the first readable frame.
>>
>> I've just sent a pair of patches, with you Cc-ed on the 2nd one.
>> Please give that one a try, with or without the updated debugging
>> patch below. In case of problems I'd of course want to see the
>> output from the debugging patch as well. I think it's up to you
>> whether you also use the first patch from that series - afaict it
>> shouldn't directly affect your case, but I may be wrong.
> 
> I've applied both patches; the system didn't boot. I used the following parameters:
> 
> xen: dom0_mem=1024M,max:2048M dom0_max_vcpus=4 dom0_vcpus_pin=true smt=true
> kernel: loglevel=3
> 
> The screen cleared right after the initial xen messages and froze
> there for a few minutes until I restarted the system.
> 
> I've added "vga=text-80x25,keep" to the xen command line and
> "nomodeset" to the kernel command line, hoping to get some more info,
> and surprisingly this was sufficient to make the system boot!

Odd, but as per my reply to the patch submission itself a
few minutes ago, over the weekend I realized a flaw. I do
think this explains the anomalies seen in the log between
CPU0 and all other CPUs; the problem merely isn't as
severe anymore as it was before, it seems. I also realized
I ought to be able to mimic your system's behavior; if so,
I ought to be able to send out an updated series that has
actually had some testing for the specific case. Later
today, hopefully.

Jan



Re: [PATCH RFC 2/2] x86/time: don't move TSC backwards in time_calibration_tsc_rendezvous()

2021-01-31 Thread Jan Beulich
On 29.01.2021 17:20, Jan Beulich wrote:
> @@ -1696,6 +1696,21 @@ static void time_calibration_tsc_rendezv
>  r->master_stime = read_platform_stime(NULL);
>  r->master_tsc_stamp = rdtsc_ordered();
>  }
> +else if ( r->master_tsc_stamp < r->max_tsc_stamp )
> +{
> +/*
> + * We want to avoid moving the TSC backwards for any CPU.
> + * Use the largest value observed anywhere on the first
> + * iteration and bump up our previously recorded system
> + * time accordingly.
> + */
> +uint64_t delta = r->max_tsc_stamp - r->master_tsc_stamp;
> +
> +r->master_stime += scale_delta(delta,
> +   &this_cpu(cpu_time).tsc_scale);
> +r->master_tsc_stamp = r->max_tsc_stamp;
> +}

I went too far here - adjusting ->master_stime like this is
a mistake. Especially in extreme cases like Claudemir's this
can lead to the read_platform_stime() visible in context
above reading a value behind the previously recorded one,
leading to NOW() moving backwards (temporarily).

Instead of this I think I will want to move the call to
read_platform_stime() to the last iteration, such that the
gap between the point in time when it was taken and the
point in time the TSCs start counting from their new values
gets minimized. In fact, I intend that change to also do
away with the unnecessary reading back of the TSC in
time_calibration_rendezvous_tail() - we already know the
closest TSC value we can get hold of (without calculations),
which is the one we wrote a few cycles back.

Jan



[linux-linus test] 158868: regressions - FAIL

2021-01-31 Thread osstest service owner
flight 158868 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158868/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. 
vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 
152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine   6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt   7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-xl7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 
152332
 test-amd64-i386-libvirt-xsm   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-raw7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim 7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-shadow 7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. 
vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen  fail REGR. vs. 152332
 test-arm64-arm64-examine  8 reboot   fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop  fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop   fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start  fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start  fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt 14 guest-start  fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-startfail REGR. vs. 152332
 test-arm64-arm64-xl   8 xen-boot fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm   8 xen-boot fail REGR. vs. 152332
 test-armhf-armhf-xl  14 guest-start  fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 14 guest-start  fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stopfail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stopfail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 

Re: Null scheduler and vwfi native problem

2021-01-31 Thread Anders Törnqvist

On 1/30/21 6:59 PM, Dario Faggioli wrote:

On Fri, 2021-01-29 at 09:08 +0100, Anders Törnqvist wrote:

On 1/26/21 11:31 PM, Dario Faggioli wrote:

Thanks again for letting us see these logs.

Thanks for the attention to this :-)

Any ideas for how to solve it?


So, you're up for testing patches, right?

Absolutely. I will apply them and be back with the results. :-)



How about applying these two, and letting me know what happens? :-D

They are on top of current staging. I can try to rebase on something
else, if it's easier for you to test.

Besides being attached, they're also available here:

https://gitlab.com/xen-project/people/dfaggioli/xen/-/tree/rcu-quiet-fix

I could not test them properly on ARM, as I don't have an ARM system
handy, so everything is possible really... just let me know.

It should at least build fine, AFAICT from here:

https://gitlab.com/xen-project/people/dfaggioli/xen/-/pipelines/249101213

Julien, back in:

  
https://lore.kernel.org/xen-devel/315740e1-3591-0e11-923a-718e06c36...@arm.com/


you said I should hook in enter_hypervisor_head(),
leave_hypervisor_tail(). Those functions are gone now and looking at
how the code changed, this is where I figured I should put the calls
(see the second patch). But feel free to educate me otherwise.

For x86 people that are listening... Do we have, in our beloved arch,
equally handy places (i.e., right before leaving Xen for a guest and
right after entering Xen from one), preferably in a C file, and for
all guests... like it seems to be the case on ARM?

Regards






Re: Null scheduler and vwfi native problem

2021-01-31 Thread Anders Törnqvist

On 1/29/21 11:16 AM, Dario Faggioli wrote:

On Fri, 2021-01-29 at 09:18 +0100, Jürgen Groß wrote:

On 29.01.21 09:08, Anders Törnqvist wrote:

So using it has only downsides (and that's true in general, if you
ask me, but particularly so if using NULL).

Thanks for the feedback.
I removed dom0_vcpus_pin. And, as you said, it seems to be
unrelated to
the problem we're discussing.

Right. Don't put it back, and stay away from it, if you accept an
advice. :-)


The system still behaves the same.


Yeah, that was expected.


When dom0_vcpus_pin is removed, xl vcpu-list looks like this:

NameID  VCPU   CPU State Time(s) Affinity (Hard / Soft)
Domain-0 0 00   r-- 29.4   all / all
Domain-0 0 11   r-- 28.7   all / all
Domain-0 0 22   r-- 28.7   all / all
Domain-0 0 33   r-- 28.6   all / all
Domain-0 0 44   r-- 28.6   all / all
mydomu   1 05   r-- 21.6 5 / all


Right, and it makes sense for it to look like this.


  From this listing (with "all" as hard affinity for dom0) one might
read it like dom0 is not pinned with hard affinity to any specific
pCPUs at all, but mydomu is pinned to pCPU 5.
Will the dom0_max_vcpus=5 in this case guarantee that dom0 will only
run on pCPUs 0-4, so that mydomu always has pCPU 5 for itself?

No.


Well, yes... if you use the NULL scheduler. Which is in use here. :-)

Basically, the NULL scheduler _always_ assigns one and only one vCPU to
each pCPU. This happens at domain (well, at the vCPU) creation time.
And it _never_ moves a vCPU away from the pCPU to which it has been
assigned.

And it also _never_ changes this vCPU-->pCPU assignment/relationship,
unless some special event happens (such as the vCPU and/or the
pCPU going offline, being removed from the cpupool, you changing the
affinity [as I'll explain below], etc).

This is the NULL scheduler's mission and only job, so it does that by
default, _without_ any need for an affinity to be specified.

So, how can affinity be useful in the NULL scheduler? Well, it's useful
if you want to control and decide to what pCPU a certain vCPU should
go.

So, let's make an example. Let's say you are in this situation:

NameID  VCPU   CPU State Time(s) Affinity (Hard 
/ Soft)
Domain-0 0 00   r-- 29.4   all / all
Domain-0 0 11   r-- 28.7   all / all
Domain-0 0 22   r-- 28.7   all / all
Domain-0 0 33   r-- 28.6   all / all
Domain-0 0 44   r-- 28.6   all / all

I.e., you have 6 CPUs, you have only dom0, dom0 has 5 vCPUs and you are
not using dom0_vcpus_pin.

The NULL scheduler has put d0v0 on pCPU 0. And d0v0 is the only vCPU
that can run on pCPU 0, despite its affinities being "all"... because
it's what the NULL scheduler does for you and it's the reason why one
uses it! :-)

Similarly, it has put d0v1 on pCPU 1, d0v2 on pCPU 2, d0v3 on pCPU 3
and d0v4 on pCPU 4. And the "exclusivity guarantee" explained above
for d0v0 and pCPU 0 applies to all these other vCPUs and pCPUs as
well.

With no affinity being specified, which vCPU is assigned to which pCPU
is entirely under the NULL scheduler control. It has its heuristics
inside, to try to do that in a smart way, but that's an
internal/implementation detail and is not relevant here.

If you now create a domU with 1 vCPU, that vCPU will be assigned to
pCPU 5.

Now, let's say that, for whatever reason, you absolutely want that d0v2
to run on pCPU 5, instead of being assigned and run on pCPU 2 (which is
what the NULL scheduler decided to pick for it). Well, what you do is
use xl, set the affinity of d0v2 to pCPU 5, and you will get something
like this as a result:

NameID  VCPU   CPU State Time(s) Affinity (Hard 
/ Soft)
Domain-0 0 00   r-- 29.4   all / all
Domain-0 0 11   r-- 28.7   all / all
Domain-0 0 25   r-- 28.7 5 / all
Domain-0 0 33   r-- 28.6   all / all
Domain-0 0 44   r-- 28.6   all / all

So, affinity is indeed useful, even when using NULL, if you want to
diverge from the default behavior and enact a certain policy, maybe due
to the nature of your workload, the characteristics of your hardware,
or whatever.

It is not, however, necessary to set the affinity in order to:
  - have a vCPU always stay on one --and always the same-- pCPU;
  - prevent any other vCPU from ever running on that pCPU.

That is guaranteed by the NULL scheduler itself. It just 

[libvirt test] 158878: regressions - FAIL

2021-01-31 Thread osstest service owner
flight 158878 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158878/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt   6 libvirt-buildfail REGR. vs. 151777
 build-amd64-libvirt   6 libvirt-buildfail REGR. vs. 151777
 build-i386-libvirt6 libvirt-buildfail REGR. vs. 151777
 build-arm64-libvirt   6 libvirt-buildfail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a

version targeted for testing:
 libvirt  35d5b26aa433bd33f4b33be3dbb67313357f97f9
baseline version:
 libvirt  2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  206 days
Failing since151818  2020-07-11 04:18:52 Z  205 days  200 attempts
Testing same since   158878  2021-02-01 04:18:44 Z0 days1 attempts


People who touched revisions under test:
  Adolfo Jayme Barrientos 
  Aleksandr Alekseev 
  Andika Triwidada 
  Andrea Bolognani 
  Balázs Meskó 
  Barrett Schonefeld 
  Bastien Orivel 
  Bihong Yu 
  Binfeng Wu 
  Boris Fiuczynski 
  Brian Turek 
  Christian Ehrhardt 
  Christian Schoenebeck 
  Cole Robinson 
  Collin Walling 
  Cornelia Huck 
  Cédric Bosdonnat 
  Côme Borsoi 
  Daniel Henrique Barboza 
  Daniel Letai 
  Daniel P. Berrange 
  Daniel P. Berrangé 
  Dmytro Linkin 
  Eiichi Tsukata 
  Erik Skultety 
  Fabian Affolter 
  Fabian Freyer 
  Fangge Jin 
  Farhan Ali 
  Fedora Weblate Translation 
  Guoyi Tu
  Göran Uddeborg 
  Halil Pasic 
  Han Han 
  Hao Wang 
  Helmut Grohne 
  Ian Wienand 
  Jamie Strandboge 
  Jamie Strandboge 
  Jan Kuparinen 
  Jean-Baptiste Holcroft 
  Jianan Gao 
  Jim Fehlig 
  Jin Yan 
  Jiri Denemark 
  John Ferlan 
  Jonathan Watt 
  Jonathon Jongsma 
  Julio Faracco 
  Ján Tomko 
  Kashyap Chamarthy 
  Kevin Locke 
  Laine Stump 
  Laszlo Ersek 
  Liao Pingfang 
  Lin Ma 
  Lin Ma 
  Lin Ma 
  Marc Hartmayer 
  Marc-André Lureau 
  Marek Marczykowski-Górecki 
  Markus Schade 
  Martin Kletzander 
  Masayoshi Mizuma 
  Matt Coleman 
  Matt Coleman 
  Mauro Matteo Cascella 
  Meina Li 
  Michal Privoznik 
  Michał Smyk 
  Milo Casagrande 
  Moshe Levi 
  Muha Aliss 
  Neal Gompa 
  Nick Shyrokovskiy 
  Nickys Music Group 
  Nico Pache 
  Nikolay Shirokovskiy 
  Olaf Hering 
  Olesya Gerasimenko 
  Orion Poplawski 
  Patrick Magauran 
  Paulo de Rezende Pinatti 
  Pavel Hrdina 
  Peter Krempa 
  Pino Toscano 
  Pino Toscano 
  Piotr Drąg 
  Prathamesh Chavan 
  Ricky Tigg 
  Roman Bogorodskiy 
  Roman Bolshakov 
  Ryan Gahagan 
  Ryan Schmidt 
  Sam Hartman 
  Scott Shambarger 
  Sebastian Mitterle 
  Shalini Chellathurai Saroja 
  Shaojun Yang 
  Shi Lei 
  Simon Gaiser 
  Stefan Bader 
  Stefan Berger 
  Szymon Scholz 
  Thomas Huth 
  Tim Wiederhake 
  Tomáš Golembiovský 
  Tomáš Janoušek 
  Tuguoyi 
  Wang Xin 
  Weblate 
  Yang Hang 
  Yanqiu Zhang 
  Yi Li 
  Yi Wang 
  Yuri Chornoivan 
  Zheng Chuan 
  zhenwei pi 
  Zhenyu Zheng 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-arm64-libvirt  fail
 build-armhf-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvops 

Re: Xen 4.14.1 on RPI4: device tree generation failed

2021-01-31 Thread Elliott Mitchell
On Sun, Jan 31, 2021 at 10:06:21PM -0500, Tamas K Lengyel wrote:
> With rpi-4.19.y kernel and dtbs
> (cc39f1c9f82f6fe5a437836811d906c709e0661c) Xen boots fine and the
> previous error is not present. I get the boot log on the serial with
> just console=hvc0 from dom0 but the kernel ends up in a panic down the
> line:

> This seems to have been caused by a monitor being attached to the HDMI
> port, with HDMI unplugged dom0 boots OK.

The balance of reports seems to suggest 5.10 is the way to go if you want
graphics on an RPi4 with Xen.  Even without Xen, 4.19 is looking rickety
on the RPi4.


On Sun, Jan 31, 2021 at 09:43:13PM -0500, Tamas K Lengyel wrote:
> On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell  wrote:
> >
> > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell  
> > > wrote:
> > > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > > point to that last being touched last year.  Their tree is at
> > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > >
> > > I've moved the Linux branch up to 5.10 because there had been a fair
> > > amount of work that went into fixing Xen on RPI4, which got merged
> > > into 5.9 and I would like to be able to build upstream everything
> > > without the custom patches coming with the rpixen script repo.
> >
> > Please keep track of where your kernel source is checked out at since
> > there was a desire to figure out what was going on with the device-trees.
> >
> >
> > Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
> > kernel command-line should ensure you get output from the kernel if it
> > manages to start (yes, Linux does support having multiple consoles at the
> > same time).
> 
> No output from dom0 received even with the added console options
> (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> with 4.19 next.

So, their current HEAD.  This reads like you've got a problematic kernel
configuration.  What procedure are you following to generate the
configuration you use?

Using their upstream as a base and then adding the configuration options
for Xen has worked fairly well for me (`make bcm2711_defconfig`,
`make menuconfig`, `make zImage`).

Notably the options:
CONFIG_PARAVIRT
CONFIG_XEN_DOM0
CONFIG_XEN
CONFIG_XEN_BLKDEV_BACKEND
CONFIG_XEN_NETDEV_BACKEND
CONFIG_HVC_XEN
CONFIG_HVC_XEN_FRONTEND

Should be set to "y".
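Collected into a .config fragment for convenience (a sketch based on the list above; exact spellings match mainline Kconfig symbols, but additional dependencies may be pulled in automatically and can vary by kernel version):

```
CONFIG_PARAVIRT=y
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_XEN_NETDEV_BACKEND=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
```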


-- 
(\___(\___(\__  --=> 8-) EHM <=--  __/)___/)___/)
 \BS (| ehem+sig...@m5p.com  PGP 87145445 |)   /
  \_CS\   |  _  -O #include  O-   _  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445





[linux-5.4 test] 158863: regressions - FAIL

2021-01-31 Thread osstest service owner
flight 158863 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158863/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start fail REGR. vs. 158387
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start   fail REGR. vs. 158387
 test-amd64-amd64-xl-multivcpu 14 guest-start fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start fail REGR. vs. 158387
 test-amd64-coresched-amd64-xl 14 guest-start fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-startfail REGR. vs. 158387
 test-amd64-coresched-i386-xl 14 guest-start  fail REGR. vs. 158387
 test-arm64-arm64-xl  14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-amd 12 redhat-install fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-freebsd10-amd64 13 guest-start   fail REGR. vs. 158387
 test-amd64-amd64-xl-pvshim   14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start   fail REGR. vs. 158387
 test-amd64-i386-freebsd10-i386 13 guest-startfail REGR. vs. 158387
 test-amd64-amd64-xl-xsm  14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-libvirt-xsm 14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-xl   14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-libvirt  14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-xl-credit1  14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-xl-shadow   14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-pair25 guest-start/debian   fail REGR. vs. 158387
 test-amd64-amd64-xl  14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-libvirt 14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-xl-xsm   14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-libvirt-pair 25 guest-start/debian  fail REGR. vs. 158387
 test-amd64-amd64-xl-credit2  14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-pair 25 guest-start/debian   fail REGR. vs. 158387
 test-amd64-i386-libvirt-xsm  14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-libvirt-pair 25 guest-start/debian   fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-amd64-pvgrub 12 debian-di-install   fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-pygrub  12 debian-di-installfail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-i386-pvgrub 12 debian-di-installfail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 
158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 
158387
 test-arm64-arm64-xl-credit1  14 guest-start  fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm  14 guest-start  fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start  fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail 
REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 
158387
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. 
vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail 
REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install 
fail REGR. vs. 158387
 test-amd64-amd64-xl-qcow212 debian-di-installfail REGR. vs. 158387
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail 
REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 
158387
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install 
fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install 
fail REGR. vs. 158387
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail 
REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 158387

[qemu-mainline test] 158860: regressions - FAIL

2021-01-31 Thread osstest service owner
flight 158860 qemu-mainline real [real]
flight 158876 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/158860/
http://logs.test-lab.xenproject.org/osstest/logs/158876/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd 17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 152631
 test-armhf-armhf-libvirt 16 saverestore-support-checkfail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim14 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass

version targeted for testing:
 qemuu74208cd252c5da9d867270a178799abd802b9338
baseline version:
 qemuu1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  164 days
Failing since152659  2020-08-21 14:07:39 Z  163 days  332 attempts
Testing same since   158816  2021-01-30 13:16:09 Z1 days3 attempts


372 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm 

Re: Xen 4.14.1 on RPI4: device tree generation failed

2021-01-31 Thread Tamas K Lengyel
On Sun, Jan 31, 2021 at 9:43 PM Tamas K Lengyel
 wrote:
>
> On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell  wrote:
> >
> > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell  
> > > wrote:
> > > >
> > > > On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> > > > > (XEN) Unable to retrieve address 0 for 
> > > > > /scb/pcie@7d50/pci@1,0/usb@1,0
> > > > > (XEN) Device tree generation failed (-22).
> > > >
> > > > > Does anyone have an idea what might be going wrong here? I tried
> > > > > building the dtb without using the dtb overlay but it didn't seem to
> > > > > do anything.
> > > >
> > > > If you go to line 1412 of the file xen/arch/arm/domain_build.c and
> > > > replace the "return res;" with "continue;" that will bypass the issue.
> > > > The 3 people I'm copying on this message though may wish to ask 
> > > > questions
> > > > about the state of your build tree.
> > >
> > > I'll try that but it's a pretty hacky work-around ;)
> >
> > Actually no, it simply causes Xen to ignore these entries.  The patch
> > I've got ready to submit to this list also adjusts the error message to
> > avoid misinterpretation, but does pretty well exactly this.
> >
> > My only concern is whether it should ignore the entries only for Domain 0
> > or should always ignore them.
> >
> >
> > > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > > point to that last being touched last year.  Their tree is at
> > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > >
> > > I've moved the Linux branch up to 5.10 because there had been a fair
> > > amount of work that went into fixing Xen on RPI4, which got merged
> > > into 5.9 and I would like to be able to build upstream everything
> > > without the custom patches coming with the rpixen script repo.
> >
> > Please keep track of where your kernel source is checked out at since
> > there was a desire to figure out what was going on with the device-trees.
> >
> >
> > Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
> > kernel command-line should ensure you get output from the kernel if it
> > manages to start (yes, Linux does support having multiple consoles at the
> > same time).
>
> No output from dom0 received even with the added console options
> (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> with 4.19 next.

With rpi-4.19.y kernel and dtbs
(cc39f1c9f82f6fe5a437836811d906c709e0661c) Xen boots fine and the
previous error is not present. I get the boot log on the serial with
just console=hvc0 from dom0 but the kernel ends up in a panic down the
line:

(XEN) traps.c:1983:d0v0 HSR=0x93860046 pc=0xff80085ac97c
gva=0xff800b096000 gpa=0x003e33
[1.242863] Unhandled fault at 0xff800b096000
[1.242871] Mem abort info:
[1.242879]   ESR = 0x9600
[1.242893]   Exception class = DABT (current EL), IL = 32 bits
[1.242922]   SET = 0, FnV = 0
[1.242928]   EA = 0, S1PTW = 0
[1.242934] Data abort info:
[1.242941]   ISV = 0, ISS = 0x
[1.242948]   CM = 0, WnR = 0
[1.242958] swapper pgtable: 4k pages, 39-bit VAs, pgdp = (ptrval)
[1.242965] [ff800b096000] pgd=33ffe003,
pud=33ffe003, pmd=3230a003, pte=00683e33070f
[1.242989] Internal error: ttbr address size fault: 9600 [#1]
PREEMPT SMP
[1.242995] Modules linked in:
[1.243005] Process swapper/0 (pid: 1, stack limit = 0x(ptrval))
[1.243014] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.127-v8+ #1
[1.243019] Hardware name: Raspberry Pi 4 Model B Rev 1.1 (DT)
[1.243026] pstate: 2005 (nzCv daif -PAN -UAO)
[1.243044] pc : cfb_imageblit+0x58c/0x820
[1.243054] lr : bcm2708_fb_imageblit+0x2c/0x40
[1.243059] sp : ff800802b4e0
[1.243063] x29: ff800802b4e0 x28: 
[1.243073] x27: 0010 x26: ffc03212c000
[1.243081] x25: 0020 x24: ffc0322c7d80
[1.243088] x23: 0008 x22: ffc03212a118
[1.243095] x21:  x20: ff800b096000
[1.243102] x19:  x18: fffc
[1.243109] x17:  x16: ff800b096000
[1.243116] x15: 0001 x14: 1e00
[1.243124] x13: 0010 x12: 
[1.243131] x11: 0020 x10: 0001
[1.243138] x9 : 0008 x8 : ff800b096020
[1.243145] x7 : ffc03212c001 x6 : 
[1.243152] x5 : ff80089e2f78 x4 : 
[1.243159] x3 : ff800b096000 x2 : ffc03212c000
[1.243166] x1 :  x0 : 
[1.243173] Call trace:
[1.243182]  cfb_imageblit+0x58c/0x820
[1.243190]  bcm2708_fb_imageblit+0x2c/0x40
[1.243197]  soft_cursor+0x16c/0x200
[1.243204]  

Re: Xen 4.14.1 on RPI4: device tree generation failed

2021-01-31 Thread Tamas K Lengyel
On Sun, Jan 31, 2021 at 9:43 PM Tamas K Lengyel
 wrote:
>
> On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell  wrote:
> >
> > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell  
> > > wrote:
> > > >
> > > > On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> > > > > (XEN) Unable to retrieve address 0 for 
> > > > > /scb/pcie@7d50/pci@1,0/usb@1,0
> > > > > (XEN) Device tree generation failed (-22).
> > > >
> > > > > Does anyone have an idea what might be going wrong here? I tried
> > > > > building the dtb without using the dtb overlay but it didn't seem to
> > > > > do anything.
> > > >
> > > > If you go to line 1412 of the file xen/arch/arm/domain_build.c and
> > > > replace the "return res;" with "continue;" that will bypass the issue.
> > > > The 3 people I'm copying on this message though may wish to ask 
> > > > questions
> > > > about the state of your build tree.
> > >
> > > I'll try that but it's a pretty hacky work-around ;)
> >
> > Actually no, it simply causes Xen to ignore these entries.  The patch
> > I've got ready to submit to this list also adjusts the error message to
> > avoid misinterpretation, but does pretty well exactly this.
> >
> > My only concern is whether it should ignore the entries only for Domain 0
> > or should always ignore them.
> >
> >
> > > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > > point to that last being touched last year.  Their tree is at
> > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > >
> > > I've moved the Linux branch up to 5.10 because there had been a fair
> > > amount of work that went into fixing Xen on RPI4, which got merged
> > > into 5.9 and I would like to be able to build upstream everything
> > > without the custom patches coming with the rpixen script repo.
> >
> > Please keep track of where your kernel source is checked out at since
> > there was a desire to figure out what was going on with the device-trees.
> >
> >
> > Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
> > kernel command-line should ensure you get output from the kernel if it
> > manages to start (yes, Linux does support having multiple consoles at the
> > same time).
>
> No output from dom0 received even with the added console options
> (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> with 4.19 next.

The dtb overlay is giving me the following error with both 4.19 and 5.10:

arch/arm64/boot/dts/overlays/pi4-64-xen.dtbo: Warning (pci_bridge):
/fragment@1/__overlay__: node name is not "pci" or "pcie"
arch/arm64/boot/dts/overlays/pi4-64-xen.dtbo: Warning (pci_bridge):
/fragment@1/__overlay__: missing ranges for PCI bridge (or not a
bridge)
arch/arm64/boot/dts/overlays/pi4-64-xen.dtbo: Warning (pci_bridge):
/fragment@1/__overlay__: incorrect #address-cells for PCI bridge
arch/arm64/boot/dts/overlays/pi4-64-xen.dtbo: Warning (pci_bridge):
/fragment@1/__overlay__: incorrect #size-cells for PCI bridge
arch/arm64/boot/dts/overlays/pi4-64-xen.dtbo: Warning
(pci_device_bus_num): Failed prerequisite 'pci_bridge'

The overlays are defined in
https://github.com/dornerworks/xen-rpi4-builder/blob/master/patches/linux/0001-Add-Xen-overlay-for-the-Pi-4.patch
as:

+/dts-v1/;
+/plugin/;
+
+/ {
+ compatible = "brcm,bcm2711";
+
+ fragment@0 {
+ target-path = "/chosen";
+ __overlay__ {
+ #address-cells = <0x1>;
+ #size-cells = <0x1>;
+ xen,xen-bootargs = "console=dtuart dtuart=/soc/serial@7e215040
sync_console dom0_mem=512M dom0_mem=512M bootscrub=0";
+
+ dom0 {
+ compatible = "xen,linux-zimage", "xen,multiboot-module";
+ reg = <0x0040 0x0100>;
+ };
+ };
+ };
+
+ fragment@1 {
+ target-path = "/scb/pcie@7d50";
+ __overlay__ {
+ device_type = "pci";
+ };
+ };
+};
+// Xen configuration for Pi 4

Don't really know what those warnings mean or how to fix them but
perhaps they are relevant to why Xen also complains?
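For what it's worth, dtc's "pci_bridge" check expects any node with device_type = "pci" to be named "pci" or "pcie" and to carry #address-cells = <3>, #size-cells = <2>, and a "ranges" property. Inside an overlay the node is literally named "__overlay__", and the fragment above supplies none of those properties, so the check fires when the overlay is compiled standalone. A sketch of one way to quiet most of the warnings (untested assumption -- at apply time the target node /scb/pcie@7d50 presumably already carries its own bridge properties, which may be why only dtc complains):

```dts
/dts-v1/;
/plugin/;

/* Sketch only: adds the cell properties dtc's pci_bridge check looks
 * for next to device_type = "pci".  "ranges" is deliberately omitted
 * because correct values depend on the board's base dts. */
/ {
	compatible = "brcm,bcm2711";

	fragment@1 {
		target-path = "/scb/pcie@7d50";
		__overlay__ {
			device_type = "pci";
			#address-cells = <3>;	/* dtc expects 3 for a PCI bridge */
			#size-cells = <2>;	/* and 2 size cells */
		};
	};
};
```

Alternatively, if the warnings are indeed an artifact of checking the overlay in isolation, dtc lets named checks be disabled at compile time (e.g. -W no-pci_bridge), though I haven't verified that against this particular dtc version.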

Tamas



Re: Xen 4.14.1 on RPI4: device tree generation failed

2021-01-31 Thread Tamas K Lengyel
On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell  wrote:
>
> On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell  wrote:
> > >
> > > On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> > > > (XEN) Unable to retrieve address 0 for 
> > > > /scb/pcie@7d50/pci@1,0/usb@1,0
> > > > (XEN) Device tree generation failed (-22).
> > >
> > > > Does anyone have an idea what might be going wrong here? I tried
> > > > building the dtb without using the dtb overlay but it didn't seem to
> > > > do anything.
> > >
> > > If you go to line 1412 of the file xen/arch/arm/domain_build.c and
> > > replace the "return res;" with "continue;" that will bypass the issue.
> > > The 3 people I'm copying on this message though may wish to ask questions
> > > about the state of your build tree.
> >
> > I'll try that but it's a pretty hacky work-around ;)
>
> Actually no, it simply causes Xen to ignore these entries.  The patch
> I've got ready to submit to this list also adjusts the error message to
> avoid misinterpretation, but does pretty well exactly this.
>
> My only concern is whether it should ignore the entries only for Domain 0
> or should always ignore them.
>
>
> > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > point to that last being touched last year.  Their tree is at
> > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> >
> > I've moved the Linux branch up to 5.10 because there had been a fair
> > amount of work that went into fixing Xen on RPI4, which got merged
> > into 5.9 and I would like to be able to build upstream everything
> > without the custom patches coming with the rpixen script repo.
>
> Please keep track of where your kernel source is checked out at since
> there was a desire to figure out what was going on with the device-trees.
>
>
> Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
> kernel command-line should ensure you get output from the kernel if it
> manages to start (yes, Linux does support having multiple consoles at the
> same time).

No output from dom0 received even with the added console options
(+earlyprintk=xen). The kernel build was from rpi-5.10.y
c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
with 4.19 next.

Tamas



Re: Xen 4.14.1 on RPI4: device tree generation failed

2021-01-31 Thread Elliott Mitchell
On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell  wrote:
> >
> > On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> > > (XEN) Unable to retrieve address 0 for /scb/pcie@7d50/pci@1,0/usb@1,0
> > > (XEN) Device tree generation failed (-22).
> >
> > > Does anyone have an idea what might be going wrong here? I tried
> > > building the dtb without using the dtb overlay but it didn't seem to
> > > do anything.
> >
> > If you go to line 1412 of the file xen/arch/arm/domain_build.c and
> > replace the "return res;" with "continue;" that will bypass the issue.
> > The 3 people I'm copying on this message though may wish to ask questions
> > about the state of your build tree.
> 
> I'll try that but it's a pretty hacky work-around ;)

Actually no, it simply causes Xen to ignore these entries.  The patch
I've got ready to submit to this list also adjusts the error message to
avoid misinterpretation, but does pretty well exactly this.

My only concern is whether it should ignore the entries only for Domain 0
or should always ignore them.


> > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > point to that last being touched last year.  Their tree is at
> > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> 
> I've moved the Linux branch up to 5.10 because there had been a fair
> amount of work that went into fixing Xen on RPI4, which got merged
> into 5.9 and I would like to be able to build upstream everything
> without the custom patches coming with the rpixen script repo.

Please keep track of where your kernel source is checked out at since
there was a desire to figure out what was going on with the device-trees.


Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
kernel command-line should ensure you get output from the kernel if it
manages to start (yes, Linux does support having multiple consoles at the
same time).


-- 
(\___(\___(\__  --=> 8-) EHM <=--  __/)___/)___/)
 \BS (| ehem+sig...@m5p.com  PGP 87145445 |)   /
  \_CS\   |  _  -O #include  O-   _  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445





Re: Xen 4.14.1 on RPI4: device tree generation failed

2021-01-31 Thread Tamas K Lengyel
On Sun, Jan 31, 2021 at 6:50 PM Tamas K Lengyel
 wrote:
>
> On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell  wrote:
> >
> > On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> > > (XEN) Unable to retrieve address 0 for /scb/pcie@7d50/pci@1,0/usb@1,0
> > > (XEN) Device tree generation failed (-22).
> >
> > > Does anyone have an idea what might be going wrong here? I tried
> > > building the dtb without using the dtb overlay but it didn't seem to
> > > do anything.
> >
> > If you go to line 1412 of the file xen/arch/arm/domain_build.c and
> > replace the "return res;" with "continue;" that will bypass the issue.
> > The 3 people I'm copying on this message though may wish to ask questions
> > about the state of your build tree.
>
> I'll try that but it's a pretty hacky work-around ;)

That change got Xen to continue but then I don't see any output from
dom0 afterwards and the system just hangs:

(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading d0 kernel from boot module @ 0048
(XEN) Allocating 1:1 mappings totalling 512MB for dom0:
(XEN) BANK[0] 0x001000-0x002800 (384MB)
(XEN) BANK[1] 0x003000-0x003800 (128MB)
(XEN) Grant table range: 0x20-0x24
(XEN) Unable to retrieve address 0 for /scb/pcie@7d50/pci@1,0/usb@1,0
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Loading zImage from 0048 to 1000-10f8
(XEN) Loading d0 DTB to 0x1800-0x1800bde9
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) ***
(XEN) WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) This option is intended to aid debugging of Xen by ensuring
(XEN) that all output is synchronously delivered on the serial line.
(XEN) However it can introduce SIGNIFICANT latencies and affect
(XEN) timekeeping. It is NOT recommended for production use!
(XEN) ***
(XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
(XEN) Please update your firmware.
(XEN) ***
(XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
(XEN) Please update your firmware.
(XEN) ***
(XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
(XEN) Please update your firmware.
(XEN) ***
(XEN) 3... 2... 1...
(XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
(XEN) Freed 352kB init memory.

Tamas



[linux-5.4 bisection] complete test-amd64-coresched-amd64-xl

2021-01-31 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-coresched-amd64-xl
testid guest-start

Tree: linux 
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux 
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158870/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse 
  Date:   Wed Jan 13 13:26:02 2021 +
  
  xen: Fix event channel callback via INTX/GSI
  
  [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
  
  For a while, event channel notification via the PCI platform device
  has been broken, because we attempt to communicate with xenstore before
  we even have notifications working, with the xs_reset_watches() call
  in xs_init().
  
  We tend to get away with this on Xen versions below 4.0 because we avoid
  calling xs_reset_watches() anyway, because xenstore might not cope with
  reading a non-existent key. And newer Xen *does* have the vector
  callback support, so we rarely fall back to INTX/GSI delivery.
  
  To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
  startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
  case, deferring it to be called from xenbus_probe() in the XS_HVM case
  instead.
  
  Then fix up the invocation of xenbus_probe() to happen either from its
  device_initcall if the callback is available early enough, or when the
  callback is finally set up. This means that the hack of calling
  xenbus_probe() from a workqueue after the first interrupt, or directly
  from the PCI platform device setup, is no longer needed.
  
  Signed-off-by: David Woodhouse 
  Reviewed-by: Boris Ostrovsky 
  Link: 
https://lore.kernel.org/r/20210113132606.422794-2-dw...@infradead.org
  Signed-off-by: Juergen Gross 
  Signed-off-by: Sasha Levin 


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-amd64-coresched-amd64-xl.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/linux-5.4/test-amd64-coresched-amd64-xl.guest-start
 --summary-out=tmp/158870.bisection-summary --basis-template=158387 
--blessings=real,real-bisect,real-retry linux-5.4 test-amd64-coresched-amd64-xl 
guest-start
Searching for failure / basis pass:
 158841 fail [host=pinot0] / 158681 [host=pinot1] 158624 ok.
Failure / basis pass flights: 158841 / 158624
(tree with no url: minios)
Tree: linux 
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0fbca6ce4174724f28be5268c5d210f51ed96e31 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c6be6dab9c4bdf135bc02b61ecc304d5511c3588 
3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 
7ea428895af2840d85c524f0bd11a38aac308308 
ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 
9dc687f155a57216b83b17f9cde55dd43e06b0cd
Basis pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 
3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 
7ea428895af2840d85c524f0bd11a38aac308308 
ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 
452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator  
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-0fbca6ce4174724f28be5268c5d210f51ed96e31
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/osstest/ovmf.git#96a9acfc527964dc5ab7298862a0cd8aa5fffc6a-c6be6dab9c4bdf135bc02b61ecc304d5511c3588
 git://xenbits.xen.org/qemu-xen-traditional\
 
.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764
 
git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308
 

Re: Xen 4.14.1 on RPI4: device tree generation failed

2021-01-31 Thread Tamas K Lengyel
On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell  wrote:
>
> On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> > (XEN) Unable to retrieve address 0 for /scb/pcie@7d50/pci@1,0/usb@1,0
> > (XEN) Device tree generation failed (-22).
>
> > Does anyone have an idea what might be going wrong here? I tried
> > building the dtb without using the dtb overlay but it didn't seem to
> > do anything.
>
> If you go to line 1412 of the file xen/arch/arm/domain_build.c and
> replace the "return res;" with "continue;" that will bypass the issue.
> The 3 people I'm copying on this message though may wish to ask questions
> about the state of your build tree.

I'll try that but it's a pretty hacky work-around ;)

>
> Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> point to that last being touched last year.  Their tree is at
> cc39f1c9f82f6fe5a437836811d906c709e0661c.

I've moved the Linux branch up to 5.10 because there had been a fair
amount of work that went into fixing Xen on RPI4, which got merged
into 5.9 and I would like to be able to build upstream everything
without the custom patches coming with the rpixen script repo.

Tamas



Re: Xen 4.14.1 on RPI4: device tree generation failed

2021-01-31 Thread Elliott Mitchell
On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> (XEN) Unable to retrieve address 0 for /scb/pcie@7d50/pci@1,0/usb@1,0
> (XEN) Device tree generation failed (-22).

> Does anyone have an idea what might be going wrong here? I tried
> building the dtb without using the dtb overlay but it didn't seem to
> do anything.

If you go to line 1412 of the file xen/arch/arm/domain_build.c and
replace the "return res;" with "continue;" that will bypass the issue.
The 3 people I'm copying on this message though may wish to ask questions
about the state of your build tree.

Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
point to that last being touched last year.  Their tree is at
cc39f1c9f82f6fe5a437836811d906c709e0661c.


-- 
(\___(\___(\__  --=> 8-) EHM <=--  __/)___/)___/)
 \BS (| ehem+sig...@m5p.com  PGP 87145445 |)   /
  \_CS\   |  _  -O #include  O-   _  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445





Re: Xen 4.14.1 on RPI4: device tree generation failed

2021-01-31 Thread Nataliya Korovkina
Hi Tamas,

I had another problem with the device tree built by this script (rpixen.sh)...

No promises, but it's worth trying on a clean tree:
make O=.build-arm64 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j $(nproc) dtbs
(instead of broadcom/${DTBFILE})

Good luck,
Nataliya

On Sun, Jan 31, 2021 at 2:07 PM Tamas K Lengyel
 wrote:
>
> Hi all,
> I'm trying to boot Xen 4.14.1 on my RPI4 with the 5.10 kernel, built
> using https://github.com/tklengyel/xen-rpi4-builder/tree/update.
> Everything builds fine and Xen boots but then I get this error:
>
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) Loading d0 kernel from boot module @ 0048
> (XEN) Allocating 1:1 mappings totalling 2048MB for dom0:
> (XEN) BANK[0] 0x000800-0x002800 (512MB)
> (XEN) BANK[1] 0x003000-0x003800 (128MB)
> (XEN) BANK[2] 0x008000-0x00c000 (1024MB)
> (XEN) BANK[3] 0x00d800-0x00f000 (384MB)
> (XEN) Grant table range: 0x20-0x24
> (XEN) Unable to retrieve address 0 for /scb/pcie@7d50/pci@1,0/usb@1,0
> (XEN) Device tree generation failed (-22).
> (XEN)
> (XEN) 
> (XEN) Panic on CPU 0:
> (XEN) Could not set up DOM0 guest OS
> (XEN) 
> (XEN)
> (XEN) Reboot in five seconds...
>
>
> Does anyone have an idea what might be going wrong here? I tried
> building the dtb without using the dtb overlay but it didn't seem to
> do anything.
>
> Thanks,
> Tamas
>



[linux-linus test] 158848: regressions - FAIL

2021-01-31 Thread osstest service owner
flight 158848 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158848/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. 
vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 
152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine   6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt   7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 
152332
 test-amd64-i386-libvirt-xsm   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-raw   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim 7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-shadow 7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. 
vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-amd64-xl  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start   fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start   fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-libvirt 14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian  fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot fail REGR. vs. 152332
 test-amd64-amd64-pair25 guest-start/debian   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot fail REGR. vs. 152332
 test-arm64-arm64-examine 13 examine-iommu  fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop   fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start  fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start  fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt 14 guest-start  fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 

[linux-5.4 test] 158841: regressions - FAIL

2021-01-31 Thread osstest service owner
flight 158841 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158841/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start fail REGR. vs. 158387
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start   fail REGR. vs. 158387
 test-amd64-amd64-xl-multivcpu 14 guest-start fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start fail REGR. vs. 158387
 test-amd64-coresched-amd64-xl 14 guest-start fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start  fail REGR. vs. 158387
 test-amd64-coresched-i386-xl 14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start  fail REGR. vs. 158387
 test-arm64-arm64-xl  14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-amd 12 redhat-install fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-freebsd10-amd64 13 guest-start   fail REGR. vs. 158387
 test-amd64-amd64-xl-pvshim   14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start   fail REGR. vs. 158387
 test-amd64-i386-freebsd10-i386 13 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-libvirt-xsm 14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-xl   14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-libvirt  14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-xl-credit1  14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-xl-shadow   14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-pair25 guest-start/debian   fail REGR. vs. 158387
 test-amd64-amd64-xl-xsm  14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-xl  14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-libvirt 14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-xl-xsm   14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-libvirt-pair 25 guest-start/debian  fail REGR. vs. 158387
 test-amd64-amd64-xl-credit2  14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-pair 25 guest-start/debian   fail REGR. vs. 158387
 test-amd64-i386-libvirt-xsm  14 guest-start  fail REGR. vs. 158387
 test-amd64-i386-libvirt-pair 25 guest-start/debian   fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-amd64-pvgrub 12 debian-di-install   fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-pygrub  12 debian-di-install  fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 
158387
 test-amd64-amd64-i386-pvgrub 12 debian-di-installfail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 
158387
 test-arm64-arm64-xl-credit1  14 guest-start  fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm  14 guest-start  fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start  fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start  fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail 
REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 
158387
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. 
vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail 
REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install 
fail REGR. vs. 158387
 test-amd64-amd64-xl-qcow2   12 debian-di-install  fail REGR. vs. 158387
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail 
REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 
158387
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install 
fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install 
fail REGR. vs. 158387
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail 
REGR. vs. 158387

Xen 4.14.1 on RPI4: device tree generation failed

2021-01-31 Thread Tamas K Lengyel
Hi all,
I'm trying to boot Xen 4.14.1 on my RPI4 with the 5.10 kernel, built
using https://github.com/tklengyel/xen-rpi4-builder/tree/update.
Everything builds fine and Xen boots but then I get this error:

(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading d0 kernel from boot module @ 0048
(XEN) Allocating 1:1 mappings totalling 2048MB for dom0:
(XEN) BANK[0] 0x000800-0x002800 (512MB)
(XEN) BANK[1] 0x003000-0x003800 (128MB)
(XEN) BANK[2] 0x008000-0x00c000 (1024MB)
(XEN) BANK[3] 0x00d800-0x00f000 (384MB)
(XEN) Grant table range: 0x20-0x24
(XEN) Unable to retrieve address 0 for /scb/pcie@7d50/pci@1,0/usb@1,0
(XEN) Device tree generation failed (-22).
(XEN)
(XEN) 
(XEN) Panic on CPU 0:
(XEN) Could not set up DOM0 guest OS
(XEN) 
(XEN)
(XEN) Reboot in five seconds...


Does anyone have an idea what might be going wrong here? I tried
building the dtb without using the dtb overlay but it didn't seem to
do anything.

Thanks,
Tamas



Re: [PATCH v2 1/4] meson: Do not build Xen x86_64-softmmu on Aarch64

2021-01-31 Thread Philippe Mathieu-Daudé
On 1/31/21 3:45 PM, Andrew Cooper wrote:
> On 31/01/2021 14:18, Philippe Mathieu-Daudé wrote:
>> The Xen on ARM documentation only mentions the i386-softmmu
>> target. As the x86_64-softmmu doesn't seem used, remove it
>> to avoid wasting cpu cycles building it.
>>
>> Signed-off-by: Philippe Mathieu-Daudé 
> 
> As far as I understand, it only gets used at all on ARM for the
> blkback=>qcow path, and has nothing to do with I440FX or other boards. 
> i.e. it is a paravirt disk and nothing else.

Yeah, the PIIX3 part is messy; it is easier to select I440FX, which
provides all the required dependencies. TBH I'd rather invest my
time in other tasks, and the Xen folks don't seem interested in getting
this improved. I only did this series to reply to Paolo and hand it
over to Alex Bennée.

> xenpv should not be tied to i386-softmmu in the first place, and would
> remove a very-WTF-worthy current state of things.  That said, I have no
> idea how much effort that might be.

Here the problem isn't so much Xen as the rest of the x86 machines in QEMU.

Regards,

Phil.



Re: [PATCH] x86/pod: Do not fragment PoD memory allocations

2021-01-31 Thread Elliott Mitchell
On Thu, Jan 28, 2021 at 10:42:27PM +, George Dunlap wrote:
> 
> > On Jan 28, 2021, at 6:26 PM, Elliott Mitchell  wrote:
> > type = "hvm"
> > memory = 1024
> > maxmem = 1073741824
> > 
> > I suspect maxmem > free Xen memory may be sufficient.  The instances I
> > can be certain of have been maxmem = total host memory *7.
> 
> Can you include your Xen version and dom0 command-line?

> This is on staging-4.14 from a month or two ago (i.e., what I happened to 
> have on a random test  box), and `dom0_mem=1024M,max:1024M` in my 
> command-line.  That rune will give dom0 only 1GiB of RAM, but also prevent it 
> from auto-ballooning down to free up memory for the guest.
> 

As this is a server, not a development target, Debian's build of 4.11 is
in use.  Your domain 0 memory allocation is extremely generous compared
to mine.  One thing which is on the command-line though is
"watchdog=true".

I've got 3 candidates which presently concern me:

1> There is a limited range of maxmem values where this occurs.  Perhaps
1TB is too high on your machine for the problem to reproduce.  As
previously stated my sample configuration has maxmem being roughly 7
times actual machine memory.

2> Between issuing the `xl create` command and the machine rebooting a
few moments of slow response have been observed.  Perhaps the memory
allocator loop is hogging processor cores long enough for the watchdog to
trigger?

3> Perhaps one of the patches on Debian broke things?  This seems
unlikely, since nearly all of Debian's patches are either strictly for
packaging or else cherry-picks from Xen's main branch, but it is
certainly possible.
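Putting the pieces of the thread together, here is a minimal sketch of the setup under discussion. The values are the ones quoted earlier in the thread; the comments are one reading of the reports, not confirmed behaviour:

```
# Guest config from the report: memory < maxmem makes xl use
# populate-on-demand (PoD), and maxmem here (in MiB) is far beyond
# any plausible host RAM.
type = "hvm"
memory = 1024
maxmem = 1073741824

# dom0 option George used while testing, to give dom0 a fixed 1GiB and
# prevent it from auto-ballooning down to free memory for the guest:
#   dom0_mem=1024M,max:1024M
```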


-- 
(\___(\___(\__  --=> 8-) EHM <=--  __/)___/)___/)
 \BS (| ehem+sig...@m5p.com  PGP 87145445 |)   /
  \_CS\   |  _  -O #include  O-   _  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445





[qemu-mainline test] 158839: regressions - FAIL

2021-01-31 Thread osstest service owner
flight 158839 qemu-mainline real [real]
flight 158857 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/158839/
http://logs.test-lab.xenproject.org/osstest/logs/158857/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd 17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 152631
 test-armhf-armhf-libvirt 16 saverestore-support-checkfail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim14 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass

version targeted for testing:
 qemuu74208cd252c5da9d867270a178799abd802b9338
baseline version:
 qemuu1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  164 days
Failing since152659  2020-08-21 14:07:39 Z  163 days  331 attempts
Testing same since   158816  2021-01-30 13:16:09 Z1 days2 attempts


372 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm 

Re: Problems starting Xen domU after latest stable update

2021-01-31 Thread Jason Andryuk
On Sat, Jan 30, 2021 at 6:25 PM Marek Marczykowski-Górecki wrote:
>
> On Fri, Jan 29, 2021 at 03:16:52PM +0100, Jürgen Groß wrote:
> > On 29.01.21 15:13, Michael Labriola wrote:
> > > On Fri, Jan 29, 2021 at 12:26 AM Jürgen Groß  wrote:
> > > > If the buggy patch has been put into stable this Fixes: tag should
> > > > result in the fix being put into the same stable branches as well.
> > >
> > > I've never done this before...  does this happen automatically?  Or is
> > > there somebody we should ping to make sure it happens?
> >
> > This happens automatically (I think).
> >
> > I have seen mails for the patch been taken for 4.14, 4.19, 5.4 and 5.10.
>
> Hmm, I can't find it in LKML archive, nor stable@ archive. And also it
> isn't included in 5.10.12 released yesterday, nor included in
> queue/5.10 [1]. Are you sure it wasn't lost somewhere in the meantime?

It probably would have gotten in eventually, but I made the explicit
request to move things along.

-Jason



Re: [PATCH v2 1/4] meson: Do not build Xen x86_64-softmmu on Aarch64

2021-01-31 Thread Andrew Cooper
On 31/01/2021 14:18, Philippe Mathieu-Daudé wrote:
> The Xen on ARM documentation only mentions the i386-softmmu
> target. As the x86_64-softmmu doesn't seem used, remove it
> to avoid wasting cpu cycles building it.
>
> Signed-off-by: Philippe Mathieu-Daudé 

As far as I understand, it only gets used at all on ARM for the
blkback=>qcow path, and has nothing to do with I440FX or other boards. 
i.e. it is a paravirt disk and nothing else.

xenpv should not be tied to i386-softmmu in the first place, and would
remove a very-WTF-worthy current state of things.  That said, I have no
idea how much effort that might be.

~Andrew



[PATCH v2 4/4] hw/xen: Have Xen machines select 9pfs

2021-01-31 Thread Philippe Mathieu-Daudé
9pfs is not an accelerator feature but a machine one;
move the selection into the machine Kconfig (in hw/).

Signed-off-by: Philippe Mathieu-Daudé 
---
 accel/Kconfig   | 1 -
 hw/i386/xen/Kconfig | 1 +
 hw/xen/Kconfig  | 1 +
 3 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/accel/Kconfig b/accel/Kconfig
index 461104c7715..b9e9a2d35b0 100644
--- a/accel/Kconfig
+++ b/accel/Kconfig
@@ -15,4 +15,3 @@ config KVM
 
 config XEN
 bool
-select FSDEV_9P if VIRTFS
diff --git a/hw/i386/xen/Kconfig b/hw/i386/xen/Kconfig
index ad9d774b9ea..4affd842f28 100644
--- a/hw/i386/xen/Kconfig
+++ b/hw/i386/xen/Kconfig
@@ -3,3 +3,4 @@ config XEN_FV
 default y if XEN
 depends on XEN
 select I440FX
+select FSDEV_9P if VIRTFS
diff --git a/hw/xen/Kconfig b/hw/xen/Kconfig
index 0b8427d6bd1..825277969fa 100644
--- a/hw/xen/Kconfig
+++ b/hw/xen/Kconfig
@@ -5,3 +5,4 @@ config XEN_PV
 select PCI
 select USB
 select IDE_PIIX
+select FSDEV_9P if VIRTFS
-- 
2.26.2




[PATCH v2 3/4] hw/xen/Kconfig: Introduce XEN_PV config

2021-01-31 Thread Philippe Mathieu-Daudé
xenpv machine requires USB, IDE_PIIX and PCI:

  /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function `xen_be_register_common':
  hw/xen/xen-legacy-backend.c:757: undefined reference to `xen_usb_ops'
  libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `unplug_disks':
  hw/i386/xen/xen_platform.c:153: undefined reference to `pci_piix3_xen_ide_unplug'
  libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `pci_unplug_nics':
  hw/i386/xen/xen_platform.c:137: undefined reference to `pci_for_each_device'
  libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `xen_platform_realize':
  hw/i386/xen/xen_platform.c:483: undefined reference to `pci_register_bar'

Signed-off-by: Philippe Mathieu-Daudé 
---
 hw/Kconfig | 1 +
 hw/xen/Kconfig | 7 +++
 2 files changed, 8 insertions(+)
 create mode 100644 hw/xen/Kconfig

diff --git a/hw/Kconfig b/hw/Kconfig
index 5ad3c6b5a4b..f2a95591d94 100644
--- a/hw/Kconfig
+++ b/hw/Kconfig
@@ -39,6 +39,7 @@ source usb/Kconfig
 source virtio/Kconfig
 source vfio/Kconfig
 source watchdog/Kconfig
+source xen/Kconfig
 
 # arch Kconfig
 source arm/Kconfig
diff --git a/hw/xen/Kconfig b/hw/xen/Kconfig
new file mode 100644
index 000..0b8427d6bd1
--- /dev/null
+++ b/hw/xen/Kconfig
@@ -0,0 +1,7 @@
+config XEN_PV
+bool
+default y if XEN
+depends on XEN
+select PCI
+select USB
+select IDE_PIIX
-- 
2.26.2




[PATCH v2 2/4] hw/i386/xen: Introduce XEN_FV Kconfig

2021-01-31 Thread Philippe Mathieu-Daudé
Introduce XEN_FV to differentiate the machine from the accelerator.

Suggested-by: Paolo Bonzini 
Signed-off-by: Philippe Mathieu-Daudé 
---
 hw/i386/Kconfig | 2 ++
 hw/i386/xen/Kconfig | 5 +
 hw/i386/xen/meson.build | 2 +-
 3 files changed, 8 insertions(+), 1 deletion(-)
 create mode 100644 hw/i386/xen/Kconfig

diff --git a/hw/i386/Kconfig b/hw/i386/Kconfig
index 7f91f30877f..b4c8aa5c242 100644
--- a/hw/i386/Kconfig
+++ b/hw/i386/Kconfig
@@ -1,3 +1,5 @@
+source xen/Kconfig
+
 config SEV
 bool
 depends on KVM
diff --git a/hw/i386/xen/Kconfig b/hw/i386/xen/Kconfig
new file mode 100644
index 000..ad9d774b9ea
--- /dev/null
+++ b/hw/i386/xen/Kconfig
@@ -0,0 +1,5 @@
+config XEN_FV
+bool
+default y if XEN
+depends on XEN
+select I440FX
diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
index be84130300c..082d0f02cf3 100644
--- a/hw/i386/xen/meson.build
+++ b/hw/i386/xen/meson.build
@@ -1,4 +1,4 @@
-i386_ss.add(when: 'CONFIG_XEN', if_true: files(
+i386_ss.add(when: 'CONFIG_XEN_FV', if_true: files(
   'xen-hvm.c',
   'xen-mapcache.c',
   'xen_apic.c',
-- 
2.26.2




[PATCH v2 1/4] meson: Do not build Xen x86_64-softmmu on Aarch64

2021-01-31 Thread Philippe Mathieu-Daudé
The Xen on ARM documentation only mentions the i386-softmmu
target. As the x86_64-softmmu doesn't seem used, remove it
to avoid wasting cpu cycles building it.

Signed-off-by: Philippe Mathieu-Daudé 
---
 meson.build | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/meson.build b/meson.build
index f00b7754fd4..97a577a7743 100644
--- a/meson.build
+++ b/meson.build
@@ -74,10 +74,10 @@
 endif
 
 accelerator_targets = { 'CONFIG_KVM': kvm_targets }
-if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
+if cpu in ['arm', 'aarch64']
  # i386 emulator provides xenpv machine type for multiple architectures
   accelerator_targets += {
-'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
+'CONFIG_XEN': ['i386-softmmu'],
   }
 endif
 if cpu in ['x86', 'x86_64']
@@ -85,6 +85,7 @@
 'CONFIG_HAX': ['i386-softmmu', 'x86_64-softmmu'],
 'CONFIG_HVF': ['x86_64-softmmu'],
 'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
+'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
   }
 endif
 
-- 
2.26.2




[PATCH v2 0/4] hw/xen: Introduce XEN_FV/XEN_PV Kconfig

2021-01-31 Thread Philippe Mathieu-Daudé
Sort the Xen buildsys glue a bit.

v2: Considered Paolo's comments from v1

Supersedes: <20210129194415.3925153-1-f4...@amsat.org>

Philippe Mathieu-Daudé (4):
  meson: Do not build Xen x86_64-softmmu on Aarch64
  hw/i386/xen: Introduce XEN_FV Kconfig
  hw/xen/Kconfig: Introduce XEN_PV config
  hw/xen: Have Xen machines select 9pfs

 meson.build | 5 +++--
 accel/Kconfig   | 1 -
 hw/Kconfig  | 1 +
 hw/i386/Kconfig | 2 ++
 hw/i386/xen/Kconfig | 6 ++
 hw/i386/xen/meson.build | 2 +-
 hw/xen/Kconfig  | 8 
 7 files changed, 21 insertions(+), 4 deletions(-)
 create mode 100644 hw/i386/xen/Kconfig
 create mode 100644 hw/xen/Kconfig

-- 
2.26.2




Re: [RFC PATCH 1/4] hw/ide/Kconfig: IDE_ISA requires ISA_BUS

2021-01-31 Thread Philippe Mathieu-Daudé
On 1/29/21 8:59 PM, Paolo Bonzini wrote:
> On 29/01/21 20:44, Philippe Mathieu-Daudé wrote:
>> hw/ide/ioport.c has a strong dependency on hw/isa/isa-bus.c:
>>
>>    /usr/bin/ld: libcommon.fa.p/hw_ide_ioport.c.o: in function `ide_init_ioport':
>>    /usr/bin/ld: hw/ide/ioport.c:61: undefined reference to `isa_register_portio_list'
>>
>> Signed-off-by: Philippe Mathieu-Daudé 
>> ---
>>   hw/ide/Kconfig | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/hw/ide/Kconfig b/hw/ide/Kconfig
>> index 5d9106b1ac2..41cdd9cbe03 100644
>> --- a/hw/ide/Kconfig
>> +++ b/hw/ide/Kconfig
>> @@ -12,7 +12,7 @@ config IDE_PCI
>>     config IDE_ISA
>>   bool
>> -    depends on ISA_BUS
>> +    select ISA_BUS
>>   select IDE_QDEV
>>     config IDE_PIIX
> 
> This is incorrect.  Buses are "depended on", not selected, and this is
> documented in docs/devel/kconfig.rst.

This is a kludge to deal with the current state of hw/i386/Kconfig.
I tried to clean it up twice (mostly because unused things are pulled
into the MIPS targets), but I eventually gave up after accepting that
the PC machines are Frankenstein ones built for virtualization, and
I've been told "if it ain't broke, don't fix it".
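For reference, the convention Paolo points to in docs/devel/kconfig.rst can be sketched with hypothetical symbols (MY_ISA_DEVICE and MY_HELPER are made-up names for illustration; only ISA_BUS is a real symbol):

```kconfig
# A bus is something a board/machine provides, so a device *depends on*
# the bus symbol -- enabling the device never drags the bus into a build
# that has no board providing it.  Internal code the device itself needs
# is pulled in unconditionally via *select*.
config MY_ISA_DEVICE
    bool
    depends on ISA_BUS
    select MY_HELPER

config MY_HELPER
    bool
```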



[xen-unstable test] 158835: tolerable FAIL

2021-01-31 Thread osstest service owner
flight 158835 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158835/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine  4 memdisk-try-append fail pass in 158811

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-checkfail  like 158811
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 158811
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 158811
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 158811
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stopfail like 158811
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 158811
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 158811
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stopfail like 158811
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 158811
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158811
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 158811
 test-amd64-i386-xl-pvshim14 guest-start  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  9dc687f155a57216b83b17f9cde55dd43e06b0cd
baseline version:
 xen  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158835  2021-01-31 01:51:26 Z0 days
Testing same since  (not found) 0 attempts

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  

[linux-5.4 bisection] complete test-amd64-amd64-xl-multivcpu

2021-01-31 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-multivcpu
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158850/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse 
  Date:   Wed Jan 13 13:26:02 2021 +
  
  xen: Fix event channel callback via INTX/GSI
  
  [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
  
  For a while, event channel notification via the PCI platform device
  has been broken, because we attempt to communicate with xenstore before
  we even have notifications working, with the xs_reset_watches() call
  in xs_init().
  
  We tend to get away with this on Xen versions below 4.0 because we avoid
  calling xs_reset_watches() anyway, because xenstore might not cope with
  reading a non-existent key. And newer Xen *does* have the vector
  callback support, so we rarely fall back to INTX/GSI delivery.
  
  To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
  startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
  case, deferring it to be called from xenbus_probe() in the XS_HVM case
  instead.
  
  Then fix up the invocation of xenbus_probe() to happen either from its
  device_initcall if the callback is available early enough, or when the
  callback is finally set up. This means that the hack of calling
  xenbus_probe() from a workqueue after the first interrupt, or directly
  from the PCI platform device setup, is no longer needed.
  
  Signed-off-by: David Woodhouse 
  Reviewed-by: Boris Ostrovsky 
  Link: https://lore.kernel.org/r/20210113132606.422794-2-dw...@infradead.org
  Signed-off-by: Juergen Gross 
  Signed-off-by: Sasha Levin 


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-amd64-amd64-xl-multivcpu.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-amd64-amd64-xl-multivcpu.guest-start --summary-out=tmp/158850.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-amd64-amd64-xl-multivcpu guest-start
Searching for failure / basis pass:
 158818 fail [host=godello1] / 158681 [host=chardonnay1] 158624 [host=fiano1] 158616 [host=huxelrebe1] 158609 ok.
Failure / basis pass flights: 158818 / 158609
(tree with no url: minios)
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e fbb3bf002b42803ef289ea2a649ebd5f25d22236
Basis pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-0fbca6ce4174724f28be5268c5d210f51ed96e31 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#3b769c5110384fb33bcfeddced80f721ec7838cc-c6be6dab9c4bdf135bc02b61ecc304d5511c3588 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764

[xen-unstable-coverity test] 158849: all pass - PUSHED

2021-01-31 Thread osstest service owner
flight 158849 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158849/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen  9dc687f155a57216b83b17f9cde55dd43e06b0cd
baseline version:
 xen  07edcd17fa2dce80250b3dd31e561268bc4663a9

Last test of basis   158704  2021-01-27 09:18:28 Z4 days
Testing same since   158849  2021-01-31 09:18:27 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Anthony PERARD 
  Ian Jackson 
  Ian Jackson 
  Igor Druzhinin 
  Jan Beulich 
  Jason Andryuk 
  Juergen Gross 
  Julien Grall 
  Julien Grall 
  Manuel Bouyer 
  Marek Kasiewicz 
  Norbert Kamiński 
  Oleksandr Tyshchenko 
  Paul Durrant 
  Rahul Singh 
  Roger Pau Monné 
  Stefano Stabellini 
  Stefano Stabellini 
  Tamas K Lengyel 
  Wei Chen 
  Wei Liu 

jobs:
 coverity-amd64   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   07edcd17fa..9dc687f155  9dc687f155a57216b83b17f9cde55dd43e06b0cd -> coverity-tested/smoke



[libvirt test] 158842: regressions - FAIL

2021-01-31 Thread osstest service owner
flight 158842 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158842/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt   6 libvirt-buildfail REGR. vs. 151777
 build-amd64-libvirt   6 libvirt-buildfail REGR. vs. 151777
 build-i386-libvirt6 libvirt-buildfail REGR. vs. 151777
 build-arm64-libvirt   6 libvirt-buildfail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a

version targeted for testing:
 libvirt  4ab0d1844a1e60def576086edc8b2c3775e7c10d
baseline version:
 libvirt  2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  205 days
Failing since151818  2020-07-11 04:18:52 Z  204 days  199 attempts
Testing same since   158805  2021-01-30 04:20:01 Z1 days2 attempts


People who touched revisions under test:
  Adolfo Jayme Barrientos 
  Aleksandr Alekseev 
  Andika Triwidada 
  Andrea Bolognani 
  Balázs Meskó 
  Barrett Schonefeld 
  Bastien Orivel 
  Bihong Yu 
  Binfeng Wu 
  Boris Fiuczynski 
  Brian Turek 
  Christian Ehrhardt 
  Christian Schoenebeck 
  Cole Robinson 
  Collin Walling 
  Cornelia Huck 
  Cédric Bosdonnat 
  Côme Borsoi 
  Daniel Henrique Barboza 
  Daniel Letai 
  Daniel P. Berrange 
  Daniel P. Berrangé 
  Dmytro Linkin 
  Eiichi Tsukata 
  Erik Skultety 
  Fabian Affolter 
  Fabian Freyer 
  Fangge Jin 
  Farhan Ali 
  Fedora Weblate Translation 
  Guoyi Tu
  Göran Uddeborg 
  Halil Pasic 
  Han Han 
  Hao Wang 
  Helmut Grohne 
  Ian Wienand 
  Jamie Strandboge 
  Jamie Strandboge 
  Jan Kuparinen 
  Jean-Baptiste Holcroft 
  Jianan Gao 
  Jim Fehlig 
  Jin Yan 
  Jiri Denemark 
  John Ferlan 
  Jonathan Watt 
  Jonathon Jongsma 
  Julio Faracco 
  Ján Tomko 
  Kashyap Chamarthy 
  Kevin Locke 
  Laine Stump 
  Laszlo Ersek 
  Liao Pingfang 
  Lin Ma 
  Lin Ma 
  Lin Ma 
  Marc Hartmayer 
  Marc-André Lureau 
  Marek Marczykowski-Górecki 
  Markus Schade 
  Martin Kletzander 
  Masayoshi Mizuma 
  Matt Coleman 
  Matt Coleman 
  Mauro Matteo Cascella 
  Meina Li 
  Michal Privoznik 
  Michał Smyk 
  Milo Casagrande 
  Moshe Levi 
  Muha Aliss 
  Neal Gompa 
  Nick Shyrokovskiy 
  Nickys Music Group 
  Nico Pache 
  Nikolay Shirokovskiy 
  Olaf Hering 
  Olesya Gerasimenko 
  Orion Poplawski 
  Patrick Magauran 
  Paulo de Rezende Pinatti 
  Pavel Hrdina 
  Peter Krempa 
  Pino Toscano 
  Pino Toscano 
  Piotr Drąg 
  Prathamesh Chavan 
  Ricky Tigg 
  Roman Bogorodskiy 
  Roman Bolshakov 
  Ryan Gahagan 
  Ryan Schmidt 
  Sam Hartman 
  Scott Shambarger 
  Sebastian Mitterle 
  Shalini Chellathurai Saroja 
  Shaojun Yang 
  Shi Lei 
  Simon Gaiser 
  Stefan Bader 
  Stefan Berger 
  Szymon Scholz 
  Thomas Huth 
  Tim Wiederhake 
  Tomáš Golembiovský 
  Tomáš Janoušek 
  Tuguoyi 
  Wang Xin 
  Weblate 
  Yang Hang 
  Yanqiu Zhang 
  Yi Li 
  Yi Wang 
  Yuri Chornoivan 
  Zheng Chuan 
  zhenwei pi 
  Zhenyu Zheng 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-arm64-libvirt  fail
 build-armhf-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvops 

[linux-linus test] 158825: regressions - FAIL

2021-01-31 Thread osstest service owner
flight 158825 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158825/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt   7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-xl7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-raw7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim 7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-shadow 7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-amd64-xl  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start   fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start   fail REGR. vs. 152332
 test-amd64-i386-examine   6 xen-install  fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-libvirt 14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian  fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot fail REGR. vs. 152332
 test-amd64-amd64-pair25 guest-start/debian   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot fail REGR. vs. 152332
 test-arm64-arm64-examine 13 examine-iommufail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop  fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop   fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start  fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start  fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start  fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-startfail REGR. vs. 152332
 test-armhf-armhf-xl  14