On Tue, 30 Jan 2024 16:22:06 +0000
Vincent Donnefort <vdonnef...@google.com> wrote:

> > > +static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
> > > +{
> > > +	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
> > > +
> > > +	WRITE_ONCE(meta->reader.read, cpu_buffer->reader_page->read);
> > > +	WRITE_ONCE(meta->reader.id, cpu_buffer->reader_page->id);
> > > +	WRITE_ONCE(meta->reader.lost_events, cpu_buffer->lost_events);
> > > +
> > > +	WRITE_ONCE(meta->entries, local_read(&cpu_buffer->entries));
> > > +	WRITE_ONCE(meta->overrun, local_read(&cpu_buffer->overrun));
> > > +	WRITE_ONCE(meta->read, cpu_buffer->read);
> > > +}
> > > +
> > >  static void
> > >  rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
> > >  {
> > > @@ -5204,6 +5225,9 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
> > >  	cpu_buffer->lost_events = 0;
> > >  	cpu_buffer->last_overrun = 0;
> > >
> > > +	if (cpu_buffer->mapped)
> >
> > Some cpu_buffer->mapped accesses are done via WRITE_ONCE()/READ_ONCE()
> > but others are not. What makes them different?
>
> cpu_buffer->mapped is read with READ_ONCE() in the section where it is not
> protected by a lock. That is (in this version) only ring_buffer_swap_cpu().
>
> That said...
>
> a. This is not enough protection at this level. Ideally the _map should
>    also call synchronize_rcu() to make sure the _swap does see
>    ->mapped > 0.
>
> b. With the refcount for the snapshot in trace/trace.c, it is not possible
>    for those functions to race: trace_arm_snapshot() and
>    tracing_buffers_mmap() are mutually exclusive and already serialized by
>    the trace mutex.
>
> So how about I completely remove those WRITE_ONCE()/READ_ONCE() and just
> rely on the protection given in trace/trace.c instead of duplicating it in
> trace/ring_buffer.c?
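Just to make sure we are picturing (a) the same way, here is a minimal
sketch of the map side. It assumes the swap path runs inside an RCU
read-side critical section, and the placement inside ring_buffer_map()
(plus whatever locking surrounds it) is my guess, not the actual patch:

	/* in ring_buffer_map(), once the buffer is published as mapped */
	WRITE_ONCE(cpu_buffer->mapped, cpu_buffer->mapped + 1);

	/*
	 * Wait out any ring_buffer_swap_cpu() already inside its RCU
	 * read-side critical section. Any swap that starts after this
	 * returns observes ->mapped > 0 and backs out.
	 */
	synchronize_rcu();

If we rely on (b) instead, the assumption should still be made explicit
in the ring buffer code.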
I would add a comment and replace the READ_ONCE() with WARN_ON_ONCE():

	/* It is up to the callers to not swap mapped buffers */
	if (WARN_ON_ONCE(cpu_buffer_a->mapped || cpu_buffer_b->mapped)) {
		ret = -EBUSY;
		goto out;
	}

-- Steve
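P.S. To show where that lands: a simplified paraphrase of
ring_buffer_swap_cpu() with the suggested check in place (the
surrounding lines are condensed from memory, not the exact tree):

	int ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
				 struct trace_buffer *buffer_b, int cpu)
	{
		struct ring_buffer_per_cpu *cpu_buffer_a = buffer_a->buffers[cpu];
		struct ring_buffer_per_cpu *cpu_buffer_b = buffer_b->buffers[cpu];
		int ret = -EBUSY;

		/* It is up to the callers to not swap mapped buffers */
		if (WARN_ON_ONCE(cpu_buffer_a->mapped || cpu_buffer_b->mapped))
			goto out;

		/* ... existing sanity checks and the actual pointer swap ... */

		ret = 0;
	out:
		return ret;
	}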