On Tue, Mar 31, 2026 at 1:05 AM Alexei Starovoitov
<[email protected]> wrote:
>
> On Sun, Mar 29, 2026 at 7:05 AM Chengkaitao <[email protected]> wrote:
> >
> > From: Kaitao Cheng <[email protected]>
> >
> > Replace per-kfunc btf_id chains check with btf_id_in_kfunc_table() and
> > static kfunc tables for easier maintenance.
> >
> > Prepare for future extensions to the bpf_list API family.
> >
> > Signed-off-by: Kaitao Cheng <[email protected]>
> > ---
> >  kernel/bpf/verifier.c | 261 +++++++++++++++++++++++-------------------
> >  1 file changed, 144 insertions(+), 117 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 4fbacd2149cd..f2d9863bb290 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -544,9 +544,6 @@ static bool is_async_callback_calling_kfunc(u32 btf_id);
> >  static bool is_callback_calling_kfunc(u32 btf_id);
> >  static bool is_bpf_throw_kfunc(struct bpf_insn *insn);
> >
> > -static bool is_bpf_wq_set_callback_kfunc(u32 btf_id);
> > -static bool is_task_work_add_kfunc(u32 func_id);
> > -
> >  static bool is_sync_callback_calling_function(enum bpf_func_id func_id)
> >  {
> >         return func_id == BPF_FUNC_for_each_map_elem ||
> > @@ -586,7 +583,7 @@ static bool is_async_cb_sleepable(struct bpf_verifier_env *env, struct bpf_insn
> >
> >         /* bpf_wq and bpf_task_work callbacks are always sleepable. */
> >         if (bpf_pseudo_kfunc_call(insn) && insn->off == 0 &&
> > -           (is_bpf_wq_set_callback_kfunc(insn->imm) || is_task_work_add_kfunc(insn->imm)))
> > +           is_async_callback_calling_kfunc(insn->imm))
> >                 return true;
> >
> >         verifier_bug(env, "unhandled async callback in is_async_cb_sleepable");
> > @@ -11203,31 +11200,6 @@ static int set_task_work_schedule_callback_state(struct bpf_verifier_env *env,
> >         return 0;
> >  }
> >
> > -static bool is_rbtree_lock_required_kfunc(u32 btf_id);
> > -
> > -/* Are we currently verifying the callback for a rbtree helper that must
> > - * be called with lock held? If so, no need to complain about unreleased
> > - * lock
> > - */
> > -static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env)
> > -{
> > -       struct bpf_verifier_state *state = env->cur_state;
> > -       struct bpf_insn *insn = env->prog->insnsi;
> > -       struct bpf_func_state *callee;
> > -       int kfunc_btf_id;
> > -
> > -       if (!state->curframe)
> > -               return false;
> > -
> > -       callee = state->frame[state->curframe];
> > -
> > -       if (!callee->in_callback_fn)
> > -               return false;
> > -
> > -       kfunc_btf_id = insn[callee->callsite].imm;
> > -       return is_rbtree_lock_required_kfunc(kfunc_btf_id);
> > -}
> > -
> >  static bool retval_range_within(struct bpf_retval_range range, const struct bpf_reg_state *reg)
> >  {
> >         if (range.return_32bit)
> > @@ -12639,11 +12611,103 @@ BTF_ID(func, bpf_session_is_return)
> >  BTF_ID(func, bpf_stream_vprintk)
> >  BTF_ID(func, bpf_stream_print_stack)
> >
> > -static bool is_task_work_add_kfunc(u32 func_id)
> > -{
> > -       return func_id == special_kfunc_list[KF_bpf_task_work_schedule_signal] ||
> > -              func_id == special_kfunc_list[KF_bpf_task_work_schedule_resume];
> > -}
> > +/* Kfunc family related to list. */
> > +static const enum special_kfunc_type bpf_list_api_kfuncs[] = {
> > +       KF_bpf_list_push_front_impl,
> > +       KF_bpf_list_push_back_impl,
> > +       KF_bpf_list_pop_front,
> > +       KF_bpf_list_pop_back,
> > +       KF_bpf_list_front,
> > +       KF_bpf_list_back,
> > +};
> > +
> > +/* Kfuncs that take a list node argument (bpf_list_node *). */
> > +static const enum special_kfunc_type bpf_list_node_api_kfuncs[] = {
> > +       KF_bpf_list_push_front_impl,
> > +       KF_bpf_list_push_back_impl,
> > +};
> > +
> > +/* Kfuncs that take an rbtree node argument (bpf_rb_node *). */
> > +static const enum special_kfunc_type bpf_rbtree_node_api_kfuncs[] = {
> > +       KF_bpf_rbtree_remove,
> > +       KF_bpf_rbtree_add_impl,
> > +       KF_bpf_rbtree_left,
> > +       KF_bpf_rbtree_right,
> > +};
> > +
> > +/* Kfunc family related to rbtree. */
> > +static const enum special_kfunc_type bpf_rbtree_api_kfuncs[] = {
> > +       KF_bpf_rbtree_add_impl,
> > +       KF_bpf_rbtree_remove,
> > +       KF_bpf_rbtree_first,
> > +       KF_bpf_rbtree_root,
> > +       KF_bpf_rbtree_left,
> > +       KF_bpf_rbtree_right,
> > +};
> > +
> > +/* Kfunc family related to spin_lock. */
> > +static const enum special_kfunc_type bpf_res_spin_lock_api_kfuncs[] = {
> > +       KF_bpf_res_spin_lock,
> > +       KF_bpf_res_spin_unlock,
> > +       KF_bpf_res_spin_lock_irqsave,
> > +       KF_bpf_res_spin_unlock_irqrestore,
> > +};
>
> I think it's a step in the wrong direction.
> I'd wait for Ihor's BTF_ID_NAMED cleanup.

After reading Ihor's messages on the list, if I understand correctly,
our two approaches target different problems. Ihor's work aims to
remove the entire enum special_kfunc_type, while my goal is to replace
the many scattered func_id == special_kfunc_list[...] comparisons
with a table-driven lookup.

That said, once Ihor's solution lands, my approach would no longer
apply as-is. I have therefore been thinking about an alternative;
please see the patch linked below for details:
https://lore.kernel.org/bpf/[email protected]/

> Kaitao Cheng,
>
> also please start your part of code reviews.
> Your patches are not going to be landing if you don't code review.
>
> https://lore.kernel.org/bpf/CAADnVQ+TKKptnNB25V3=bcdybh5g6c2dyw2sytxvyravnpn...@mail.gmail.com/

I will do my best to carry this forward.

-- 
Yours,
Chengkaitao
