On Wed, May 15, 2019 at 01:04:16PM +0300, Alexander Shishkin wrote:
> Peter Zijlstra <pet...@infradead.org> writes:
> 
> > On Wed, May 15, 2019 at 09:51:07AM +0300, Alexander Shishkin wrote:
> >> Yabin Cui <yab...@google.com> writes:
> >> 
> >> > diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
> >> > index 674b35383491..0b9aefe13b04 100644
> >> > --- a/kernel/events/ring_buffer.c
> >> > +++ b/kernel/events/ring_buffer.c
> >> > @@ -54,8 +54,10 @@ static void perf_output_put_handle(struct perf_output_handle *handle)
> >> >           * IRQ/NMI can happen here, which means we can miss a head update.
> >> >           */
> >> >  
> >> > -        if (!local_dec_and_test(&rb->nest))
> >> > +        if (local_read(&rb->nest) > 1) {
> >> > +                local_dec(&rb->nest);
> >> 
> >> What stops rb->nest changing between local_read() and local_dec()?
> >
> > Nothing, however it must remain the same :-)
> >
> > That is the cryptic way of saying that since these buffers are strictly
> > per-cpu, the only change can come from interrupts, and they must have a
> > net 0 change. Or rather, an equal number of decrements and increments.
> >
> > So if it changes, it must also change back to where it was.
> 
> Ah that's true. So the whole ->nest thing can be done with
> READ_ONCE()/WRITE_ONCE() instead?
> Because the use of local_dec_and_test() creates an impression that we
> rely on atomicity of it, which in actuality we don't.

Yes, I think we can get away with that. And that might be a worthwhile
optimization for !x86.
