On Mon, Jan 05, 2026 at 10:59:31AM -0500, Steven Rostedt wrote:
> On Fri,  2 Jan 2026 19:23:33 -0500
> Joel Fernandes <[email protected]> wrote:
> 
> > +#ifdef CONFIG_RCU_PER_CPU_BLOCKED_LISTS
> > +/*
> > + * Promote blocked tasks from a single CPU's per-CPU list to the rnp list.
> > + *
> > + * If there are no tracked blockers (gp_tasks NULL) and this CPU
> > + * is still blocking the corresponding GP (bit set in qsmask), set
> > + * the pointer to ensure the GP machinery knows about the blocking task.
> > + * This handles late promotion during QS reporting, where tasks may have
> > + * blocked after rcu_gp_init() or sync_exp_reset_tree() ran their scans.
> > + */
> > +static void rcu_promote_blocked_tasks_rdp(struct rcu_data *rdp,
> > +                                     struct rcu_node *rnp)
> > +{
> > +   struct task_struct *t, *tmp;
> > +
> > +   raw_lockdep_assert_held_rcu_node(rnp);
> > +
> > +   raw_spin_lock(&rdp->blkd_lock);
> > +   list_for_each_entry_safe(t, tmp, &rdp->blkd_list, rcu_rdp_entry) {
> 
> How big can this list be? This would be considered an unbounded latency for
> PREEMPT_RT. If this is needed, then we need to disable this when PREEMPT_RT
> is enabled.

Steve, thanks. This is still very much in the experimental/RFC phase, but
if we ever were to do this, we could splice the per-CPU list onto the rnp
list in O(1) instead of the O(N) walk I am doing here. Great point.
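
Roughly, the O(1) version could look something like the below. This is just
a sketch: it reuses the blkd_lock/blkd_list fields and the gp_tasks/qsmask
check from this patch, assumes rnp->blkd_tasks is the destination list and
that both lists end up using the same task_struct list member (otherwise a
per-entry move is still needed), and ignores the exp_tasks side:

	static void rcu_promote_blocked_tasks_rdp(struct rcu_data *rdp,
						  struct rcu_node *rnp)
	{
		raw_lockdep_assert_held_rcu_node(rnp);

		raw_spin_lock(&rdp->blkd_lock);
		if (!list_empty(&rdp->blkd_list)) {
			/* Move every blocked task over in one O(1) splice. */
			list_splice_init(&rdp->blkd_list, &rnp->blkd_tasks);
			/*
			 * As in the RFC: if no blocker is tracked yet and
			 * this CPU still blocks the current GP, point
			 * gp_tasks at the first spliced entry so the GP
			 * machinery sees it.
			 */
			if (!rnp->gp_tasks && (rnp->qsmask & rdp->grpmask))
				WRITE_ONCE(rnp->gp_tasks, rnp->blkd_tasks.next);
		}
		raw_spin_unlock(&rdp->blkd_lock);
	}

The splice keeps the rnp lock hold time independent of how many tasks
blocked on this CPU, which should address the PREEMPT_RT latency concern.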

Thanks as well for the suggestions about the guards on the other patch; I
shall use them where possible in any of my new code.
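
Assuming the suggestion was the scoped guards from <linux/cleanup.h> and
<linux/spinlock.h>, the locking above could then shrink to something like
(again just a sketch):

		/* guard() releases blkd_lock automatically at scope exit. */
		guard(raw_spinlock)(&rdp->blkd_lock);
		if (!list_empty(&rdp->blkd_list))
			list_splice_init(&rdp->blkd_list, &rnp->blkd_tasks);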

thanks,

 - Joel

