On Fri, Nov 6, 2020 at 5:28 PM Song Liu <songliubrav...@fb.com> wrote:
>
> > On Nov 6, 2020, at 3:02 PM, Andrii Nakryiko <and...@kernel.org> wrote:
> >
> > Adjust in-kernel BTF implementation to support a split BTF mode of operation.
> > Changes are mostly mirroring libbpf split BTF changes, with the exception of
> > start_id being 0 for in-kernel implementation due to simpler read-only mode.
> >
> > Otherwise, for split BTF logic, most of the logic of jumping to base BTF,
> > where necessary, is encapsulated in few helper functions. Type numbering and
> > string offset in a split BTF are logically continuing where base BTF ends, so
> > most of the high-level logic is kept without changes.
> >
> > Type verification and size resolution is only doing an added resolution of new
> > split BTF types and relies on already cached size and type resolution results
> > in the base BTF.
> >
> > Signed-off-by: Andrii Nakryiko <and...@kernel.org>
>
> [...]
>
> >
> > @@ -600,8 +618,15 @@ static const struct btf_kind_operations *btf_type_ops(const struct btf_type *t)
> >
> >  static bool btf_name_offset_valid(const struct btf *btf, u32 offset)
> >  {
> > -       return BTF_STR_OFFSET_VALID(offset) &&
> > -               offset < btf->hdr.str_len;
> > +       if (!BTF_STR_OFFSET_VALID(offset))
> > +               return false;
> > +again:
> > +       if (offset < btf->start_str_off) {
> > +               btf = btf->base_btf;
> > +               goto again;
>
> Can we do a while loop instead of "goto again;"?
yep, not sure why I went with goto...

while (offset < btf->start_str_off)
        btf = btf->base_btf;

Shorter.

>
> > +       }
> > +       offset -= btf->start_str_off;
> > +       return offset < btf->hdr.str_len;
> >  }
> >
> >  static bool __btf_name_char_ok(char c, bool first, bool dot_ok)
> > @@ -615,10 +640,25 @@ static bool __btf_name_char_ok(char c, bool first, bool dot_ok)
> >         return true;
> >  }
> >
> > +static const char *btf_str_by_offset(const struct btf *btf, u32 offset)
> > +{
> > +again:
> > +       if (offset < btf->start_str_off) {
> > +               btf = btf->base_btf;
> > +               goto again;
> > +       }
>
> Maybe add a btf_find_base_btf(btf, offset) helper for this logic?

No strong feelings about this, but given it's a two-line loop it might not be
worth it. I'd also need pretty verbose btf_find_base_btf_for_str_offset() and
btf_find_base_btf_for_type_id() helpers. I feel like the loop might actually
be less distracting.

>
> > +
> > +       offset -= btf->start_str_off;
> > +       if (offset < btf->hdr.str_len)
> > +               return &btf->strings[offset];
> > +
> > +       return NULL;
> > +}
> > +
>
> [...]
>
> >  }
> >
> >  const char *btf_name_by_offset(const struct btf *btf, u32 offset)
> >  {
> > -       if (offset < btf->hdr.str_len)
> > -               return &btf->strings[offset];
> > -
> > -       return NULL;
> > +       return btf_str_by_offset(btf, offset);
> >  }
>
> IIUC, btf_str_by_offset() and btf_name_by_offset() are identical. Can we
> just keep btf_name_by_offset()?

btf_str_by_offset() is static, so it should be inlinable, while
btf_name_by_offset() is a global function. I was worried about regressing
performance for __btf_name_valid() and __btf_name_by_offset(). Premature
optimization, you think?

>
> >
> >  const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id)
> >  {
> > -       if (type_id > btf->nr_types)
> > -               return NULL;
> > +again:
> > +       if (type_id < btf->start_id) {
> > +               btf = btf->base_btf;
> > +               goto again;
> > +       }
>
> ditto, goto again..
>
> [...]
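
For completeness, here's a rough, untested sketch of what the three lookups
discussed above would look like with the goto replaced by a loop. It's based
only on the snippets quoted in this thread (same start_str_off/start_id/
base_btf fields as in the patch); btf_type_by_id()'s tail is elided here the
same way it is in the quote:

static bool btf_name_offset_valid(const struct btf *btf, u32 offset)
{
        if (!BTF_STR_OFFSET_VALID(offset))
                return false;

        /* string offsets in split BTF logically continue base BTF's string
         * section, so walk down to the BTF that actually covers offset
         */
        while (offset < btf->start_str_off)
                btf = btf->base_btf;

        offset -= btf->start_str_off;
        return offset < btf->hdr.str_len;
}

static const char *btf_str_by_offset(const struct btf *btf, u32 offset)
{
        while (offset < btf->start_str_off)
                btf = btf->base_btf;

        offset -= btf->start_str_off;
        if (offset < btf->hdr.str_len)
                return &btf->strings[offset];

        return NULL;
}

const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id)
{
        /* same walk for type IDs, which continue base BTF's numbering */
        while (type_id < btf->start_id)
                btf = btf->base_btf;
        [...]
}

Both loops terminate at the base BTF: its start_str_off and start_id are 0 in
the kernel (per the patch description), so the loop condition can't hold there
for an unsigned offset/type_id and a NULL base_btf is never followed.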