On Thu, 13 Nov 2025 17:07:39 +0100 Sebastian Andrzej Siewior <[email protected]> wrote:
> On 2025-11-13 10:51:06 [-0500], Steven Rostedt wrote:
> > Yes, because they are only tested in sched_switch and fork and exit
> > tracepoints.
> > 
> > Although, this was written when tracepoint callbacks were always called
> > under preempt disable. Perhaps we need to change that call to:
> > 
> >   tracepoint_synchronize_unregister()
> > 
> > Or add a preempt_disable() around the callers.
> 
> Please don't. Please do a regular rcu_read_lock() ;)
> I tried to get rid of the preempt_disable() around tracepoints so that
> the attached BPF callbacks are not invoked with disabled preemption. I
> haven't followed up here in a while but I think Paul's SRCU work goes
> in the right direction.

I meant just reading the pid lists, which is usually done from
tracepoints that are in preempt-disabled locations.

Anyway, I can add rcu_read_lock() around the callers of it.

> > I'm very nervous about using RCU here. It will add a lot more corner
> > cases that need to be accounted for. The complexity doesn't appear to
> > be worth it. I'd rather just keep the raw spin locks than to convert
> > it to RCU.
> > 
> > The seqcount makes sense to me. It's simple and keeps the same
> > paradigm as what we have. What's wrong with using it?
> 
> I'm fine with it once you've explained under what conditions a retry
> can happen. Thank you.

Thanks,

-- Steve
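
[A rough sketch of the rcu_read_lock() variant discussed above: the
pid-list check in a tracepoint callback is done inside a plain RCU
read-side critical section instead of relying on the callback running
with preemption disabled. The pid-list type and lookup helper here are
stand-ins, not the in-tree trace_pid_list API.]

/*
 * Sketch only: check a filtered-pid list from a tracepoint callback
 * under rcu_read_lock().  The pid-list structure and lookup helper
 * are illustrative stand-ins.
 */
#include <linux/rcupdate.h>
#include <linux/sched.h>

struct pid_list_stub {
	unsigned int single_pid;	/* stand-in for the real pid bitmask */
};

static struct pid_list_stub __rcu *filtered_pids;

static bool pid_list_contains(struct pid_list_stub *list, unsigned int pid)
{
	return list->single_pid == pid;
}

/* Called from a tracepoint callback; may now run preemptible. */
static bool ignore_this_task(struct task_struct *task)
{
	struct pid_list_stub *list;
	bool ignore = false;

	rcu_read_lock();
	/* The pid-list pointer is published/replaced under RCU. */
	list = rcu_dereference(filtered_pids);
	if (list)
		ignore = !pid_list_contains(list, task->pid);
	rcu_read_unlock();

	return ignore;
}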

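[For the seqcount option, a minimal sketch of the retry the last
paragraph asks about: the reader loops only if a writer updated the
list between read_seqcount_begin() and read_seqcount_retry(), so a
retry can only happen while an update is in flight. The raw spinlock
stays on the write side. The lock/seqcount names and the single-value
stand-in for the pid list are hypothetical.]

/*
 * Sketch only: raw spinlock serializes writers, seqcount lets
 * lockless readers detect a concurrent update and retry.
 */
#include <linux/seqlock.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(pid_list_lock);
static seqcount_raw_spinlock_t pid_list_seq =
	SEQCNT_RAW_SPINLOCK_ZERO(pid_list_seq, &pid_list_lock);

static unsigned long filtered_pid;	/* stand-in for the real pid list */

/* Writer: runs rarely, e.g. when the pid filter is changed. */
static void pid_list_update(unsigned long pid)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&pid_list_lock, flags);
	write_seqcount_begin(&pid_list_seq);
	filtered_pid = pid;
	write_seqcount_end(&pid_list_seq);
	raw_spin_unlock_irqrestore(&pid_list_lock, flags);
}

/* Reader: called from a tracepoint; retries only during a concurrent update. */
static bool pid_is_filtered(unsigned long pid)
{
	unsigned int seq;
	bool ret;

	do {
		seq = read_seqcount_begin(&pid_list_seq);
		ret = (filtered_pid == pid);
	} while (read_seqcount_retry(&pid_list_seq, seq));

	return ret;
}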