On Fri, 2016-06-03 at 12:10 +0800, xinhui wrote:
> > On 2016-06-03 09:32, Benjamin Herrenschmidt wrote:
> > On Fri, 2016-06-03 at 11:32 +1000, Benjamin Herrenschmidt wrote:
> >> On Thu, 2016-06-02 at 17:22 +0800, Pan Xinhui wrote:
> >>>
> >>> Base code to enable qspinlock on powerpc. This patch adds some
> >>> #ifdefs here and there. Although there is no paravirt-related
> >>> code, we can successfully build a qspinlock kernel after
> >>> applying this patch.
> >> This is missing the IO_SYNC stuff ... It means we'll fail to do a
> >> full
> >> sync to order vs MMIOs.
> >>
> >> You need to add that back in the unlock path.
> >
> > Well, and in the lock path as well...
> >
> Oh, yes. I missed the IO_SYNC stuff.
> 
> thank you, Ben :)
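
For reference, the IO_SYNC scheme in question looks roughly like the
sketch below. This is a simplified paraphrase of the current powerpc
arch_spin_lock()/arch_spin_unlock() (the sketch_* names and the
stripped-down lock loop are mine, not the real source): MMIO write
accessors set a per-CPU flag, the lock path clears it, and the unlock
path upgrades its release barrier to a full sync when the flag is set,
so device stores can't leak out of the critical section. Whatever the
qspinlock paths end up doing needs to preserve that behaviour.

/* Simplified sketch of the existing scheme, not the exact source. */
static inline void sketch_arch_spin_lock(arch_spinlock_t *lock)
{
	get_paca()->io_sync = 0;		/* CLEAR_IO_SYNC */
	while (__arch_spin_trylock(lock) != 0)
		cpu_relax();
}

static inline void sketch_arch_spin_unlock(arch_spinlock_t *lock)
{
	if (unlikely(get_paca()->io_sync)) {	/* SYNC_IO */
		mb();		/* full sync orders prior MMIOs vs. unlock */
		get_paca()->io_sync = 0;
	}
	__asm__ __volatile__(PPC_RELEASE_BARRIER : : : "memory");
	lock->slock = 0;
}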

OK, a couple of other things that would be nice from my perspective
(and Michael's), if you can produce them:

 - Some benchmarks of the qspinlock alone, without the PV stuff,
   so we understand how much of the overhead is inherent to the
   qspinlock and how much is introduced by the PV bits.

 - For the above, can you show (or describe) where the qspinlock
   improves things compared to our current locks? While there's
   theory, and to some extent practice, on x86, it would be nice to
   validate the effects on POWER.

 - A comparative benchmark with the PV stuff enabled, on a bare-metal
   system, to understand the overhead there.

 - A comparative benchmark with the PV stuff under pHyp and under KVM.

Spinlocks are fiddly and a critical piece of infrastructure; it's
important that we fully understand the performance implications before
we decide to switch to a new model.

Cheers,
Ben.

