On Sun, Aug 16, 2020 at 03:38:20PM +0200, Cédric Le Goater wrote:
> On 8/16/20 6:30 AM, David Gibson wrote:
> > On Fri, Aug 14, 2020 at 05:08:13PM +0200, Cédric Le Goater wrote:
> >>
> >> This works as expected with a 128 vCPUs guest with pinned vcpus. The
> >> first 64 IPIs are allocated on the first chip and the remaining 64
> >> on the second chip.
> >>
> >> Still, this is more an RFC. We have time before the end of the merge
> >> window.
> > 
> > It looks reasonable to me.  AFAICT it makes things better than they
> > were, and even if we can improve it further, that won't break
> > migration or other interfaces we need to preserve.
> 
> Yeah. What I don't like is this test below. I am not sure that 
> machine->smp.cpus is the correct way to test the number of currently
> active vCPUs.

Ah, yeah.  It should be correct at initial startup, but might not be
after a bunch of hotplugs/unplugs, which could result in a sparse set
of active vcpus.

Usually this code will be called during initial setup, but I think
there are some edge cases where it won't be (e.g. boot a XICS kernel,
do some vcpu plugs/unplugs, reboot into a XIVE kernel).

So I think we need to explicitly check for each vcpu # if it's
currently active.  Using... spapr_find_cpu(), I guess?

> 
> >>> +    if (srcno < machine->smp.cpus) {
> >>> +        return kvmppc_xive_reset_ipi(xive, srcno, errp);
> >>> +    }
> >>> +
> >>>      if (xive_source_irq_is_lsi(xsrc, srcno)) {
> >>>          state |= KVM_XIVE_LEVEL_SENSITIVE;
> >>>          if (xsrc->status[srcno] & XIVE_STATUS_ASSERTED) {
> 

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson