On Tue Jun 20, 2023 at 8:27 PM AEST, Cédric Le Goater wrote:
> On 6/20/23 12:12, Nicholas Piggin wrote:
> > On Wed Jun 7, 2023 at 12:09 AM AEST, Cédric Le Goater wrote:
> >> On 6/5/23 13:23, Nicholas Piggin wrote:
> >>> Previous RFC here
> >>>
> >>> https://lists.gnu.org/archive/html/qemu-ppc/2023-05/msg00453.html
> >>>
> >>> This series drops patch 1 from the previous, which is more of
> >>> a standalone bugfix.
> >>>
> >>> Also accounted for Cédric's comments, except for a nicer way to
> >>> set cpu_index vs the PIR/TIR SPRs, which is not quite trivial.
> >>>
> >>> This limits support for SMT to POWER8 and newer. It is also
> >>> incompatible with nested-HV so that is checked for too.
> >>>
> >>> For now I kept the loop that iterates over all CPUs to find
> >>> siblings, because similar loops exist in a few places, and it is
> >>> not conceptually difficult for SMT, just fiddly code to improve.
> >>> It should not be much of a performance concern for now.
> >>>
> >>> I removed hypervisor msgsnd support from patch 3; it is not
> >>> required for spapr and added significantly to the size of the
> >>> patch.
> >>>
> >>> So far nobody has objected to the way shared SPR access is
> >>> handled (serialised with TCG atomics support), so we'll keep
> >>> going with it.
> >>
> >> Cc:ing more people for possible feedback.
> > 
> > Not much feedback so I'll plan to go with this.
> > 
> > A more performant implementation might try to synchronize
> > threads at the register level rather than serialize everything,
> > but SMT shared registers are not too performance critical so
> > this should do for now.
>
> Yes. Could you please rebase this series on upstream?

Oh yeah I should have said, I will rebase and resend.

> It would be good to add tests for SMT. Maybe we could extend:
>
>    tests/avocado/ppc_pseries.py
>
> with a couple of extra QEMU configs that add 'threads=' (if possible),
> and check for:
>
>    "CPU maps initialized for Y threads per core"
>
> and
>
>    "smp: Brought up 1 node, X*Y CPUs"
>
> ?

Yeah that could be a good idea, I'll try it.
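
Something along these lines, perhaps (a completely untested sketch,
modelled on the existing test in tests/avocado/ppc_pseries.py; the
kernel asset URL/hash and the exact console strings are placeholders
that would need verifying against a real guest boot):

from avocado_qemu import QemuSystemTest
from avocado_qemu import wait_for_console_pattern

# Placeholder asset: reuse whichever kernel ppc_pseries.py already
# fetches rather than these made-up values.
KERNEL_URL = 'https://.../vmlinuz'
KERNEL_HASH = '0123456789abcdef0123456789abcdef01234567'

class pseriesSMTMachine(QemuSystemTest):

    timeout = 90
    panic_message = 'Kernel panic - not syncing'

    def test_pseries_smt4(self):
        """
        :avocado: tags=arch:ppc64
        :avocado: tags=machine:pseries
        """
        kernel_path = self.fetch_asset(KERNEL_URL, asset_hash=KERNEL_HASH)

        self.vm.set_console()
        # 2 cores x 4 threads. Needs a POWER8 or newer CPU model and
        # no nested-HV, per the restrictions in this series.
        self.vm.add_args('-kernel', kernel_path,
                         '-append', 'console=hvc0',
                         '-smp', '8,threads=4')
        self.vm.launch()

        # The guest kernel should report the SMT topology ...
        wait_for_console_pattern(self,
                                 'CPU maps initialized for 4 threads per core',
                                 self.panic_message)
        # ... and then bring all 8 CPUs up.
        wait_for_console_pattern(self,
                                 'smp: Brought up 1 node, 8 CPUs',
                                 self.panic_message)

A second variant with a different threads= value could cover the
topology handling a bit more.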

Thanks,
Nick
