On 12/9/2025 7:06 PM, Konstantin Ananyev wrote:
>
>
>> On Mon, 8 Dec 2025 09:43:01 +0000
>> Konstantin Ananyev <[email protected]> wrote:
>>
>>>> Allow users to provide custom
>>>> memory allocation hooks for runtime memory in rte_acl_ctx, via
>>>> struct rte_acl_mem_hook.
>>>
>>> LGTM in general, few extra comments below.
>>>
>>>> Key changes:
>>>> - Added struct rte_acl_mem_hook with zalloc, free, and udata.
>>>> - Added rte_acl_set_mem_hook / rte_acl_get_mem_hook to set/get callbacks.
>>>> - Default allocation uses existing rte_zmalloc_socket/rte_free.
>>>> - Modified ACL code to call callbacks for runtime allocations instead
>>>>   of rte_zmalloc_socket/rte_free directly.
>>>>
>>>> v5:
>>>> - Remove temporary memory allocation callback for build stage.
>>>> - Introduce new API (rte_acl_set_mem_hook / rte_acl_get_mem_hook)
>>>>   instead of modifying existing rte_acl_config to preserve
>>>>   ABI compatibility.
>>>>
>>>> v6:
>>>> - Reworked API to meet consistency and naming conventions.
>>>> - Adjusted parameter order for better readability and alignment.
>>>> - Renamed internal variables for clarity and code consistency.
>>>>
>>>> Signed-off-by: YongFeng Wang <[email protected]>
>>>> ---
>>>>   app/test/test_acl.c                           | 121 ++++++++++++++++++
>>>>   .../prog_guide/packet_classif_access_ctrl.rst |  31 +++++
>>>>   lib/acl/acl.h                                 |   1 +
>>>>   lib/acl/acl_bld.c                             |   2 +-
>>>>   lib/acl/acl_gen.c                             |   4 +-
>>>>   lib/acl/rte_acl.c                             |  45 ++++++-
>>>>   lib/acl/rte_acl.h                             |  47 +++++++
>>>>   7 files changed, 247 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/app/test/test_acl.c b/app/test/test_acl.c
>>>> index 43d13b5b0f..3c9a0cb8c0 100644
>>>> --- a/app/test/test_acl.c
>>>> +++ b/app/test/test_acl.c
>>>> @@ -1721,6 +1721,125 @@ test_u32_range(void)
>>>>  	return rc;
>>>>  }
>>>>
>>>> +struct acl_ctx_wrapper {
>>>> +	struct rte_acl_ctx *ctx;
>>>> +	void *running_buf;
>>>> +	bool running_buf_using;
>>>> +};
>>>> +
>>>> +#define ACL_RUNNING_BUF_SIZE (10 * 1024 * 1024)
>>>> +
>>>> +static void *running_alloc(char *name, size_t size,
>>>> +	size_t align, int32_t socket_id, void *udata)
>>>> +{
>>>> +	RTE_SET_USED(align);
>>>> +	RTE_SET_USED(name);
>>>> +	RTE_SET_USED(socket_id);
>>>> +	if (size > ACL_RUNNING_BUF_SIZE)
>>>> +		return NULL;
>>>> +	struct acl_ctx_wrapper *acl_ctx = (struct acl_ctx_wrapper *)udata;
>>>> +	if (acl_ctx->running_buf_using)
>>>> +		return NULL;
>>>> +	printf("running memory alloc for acl context, size=%zu, pointer=%p\n",
>>>> +		size,
>>>> +		acl_ctx->running_buf);
>>>> +	memset(acl_ctx->running_buf, 0, size);
>>>> +	acl_ctx->running_buf_using = true;
>>>> +	return acl_ctx->running_buf;
>>>> +}
>>>
>>> Is there any point to have such memhook in our UT?
>>> From one side: it doesn't test anything new, as memory is still allocated via rte_zmalloc().
>>> From other side it is error prone, as you don't check that pre-allocated buffer
>>> will really satisfy requested size and alignment parameters.
>>> Might be just use libc malloc/free here?
>>
>> A lot of the problems would go away if ACL just used regular malloc/free more,
>> and rte_malloc/rte_free less.
>
> It uses rte_malloc in just two places - to allocate ctx itself and for actual
> Run-Time table.
> All temporary allocations are done with normal malloc.
> There are obvious reasons why people prefer to use rte_malloc-ed memory
> in their data-path functions: rte-malloc-ed memory uses hugepages and is MP shared.
> So I suppose providing users a choice where they want their ACL tables to be located
> is a good option.
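
On the unit-test question above: a libc-backed hook pair could look roughly like the
sketch below. This is only an illustration, not code from the patch; the zalloc
signature is taken from running_alloc() above, while the free-hook signature
(buf, udata) and the helper names ut_zalloc/ut_free are my assumptions.

	#include <stdint.h>
	#include <stdlib.h>
	#include <string.h>

	static void *
	ut_zalloc(char *name, size_t size, size_t align, int32_t socket_id, void *udata)
	{
		void *p = NULL;

		(void)name;
		(void)socket_id;
		(void)udata;

		/* honour the requested alignment; posix_memalign() wants a
		 * power-of-two multiple of sizeof(void *), so bump tiny values */
		if (align < sizeof(void *))
			align = sizeof(void *);
		if (posix_memalign(&p, align, size) != 0)
			return NULL;

		memset(p, 0, size);	/* keep the "zalloc" semantics */
		return p;
	}

	static void
	ut_free(void *buf, void *udata)
	{
		(void)udata;
		free(buf);
	}

Independent of the unit test, though, there is a more general concern: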
There is a global ACL list (rte_acl_tailq) that works across multiple processes: the main process creates an ACL context, and a secondary process can then obtain the same context by calling rte_acl_create() with the same name. That mechanism relies on the ACL library allocating from rte_malloc, and it breaks once this commit is introduced, since a custom hook can place the run-time memory outside the shared rte_malloc heap where a secondary process cannot reach it. (A sketch of the pattern I mean is at the end of this mail.)

>
>> The existing rte_malloc is slow and fragments badly.
> Then we probably need to improve it, don't we?
>
>
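
Here is the sketch mentioned above (the names MP_ACL_NAME, primary_create,
secondary_lookup and the parameter values are only illustrative, not from the patch):

	#include <rte_acl.h>
	#include <rte_memory.h>	/* SOCKET_ID_ANY */

	#define MP_ACL_NAME "mp_acl_ctx"

	/* primary process: create (and later build) the context as usual */
	static struct rte_acl_ctx *
	primary_create(void)
	{
		struct rte_acl_param param = {
			.name = MP_ACL_NAME,
			.socket_id = SOCKET_ID_ANY,
			.rule_size = RTE_ACL_RULE_SZ(5),	/* e.g. a 5-field IPv4 layout */
			.max_rule_num = 1024,
		};

		return rte_acl_create(&param);	/* also links the ctx into rte_acl_tailq */
	}

	/* secondary process: gets the very same context by name; this only works
	 * while the ctx and its run-time tables sit in rte_malloc-ed shared memory */
	static struct rte_acl_ctx *
	secondary_lookup(void)
	{
		return rte_acl_find_existing(MP_ACL_NAME);
	}

With a user-supplied hook, the run-time table is not necessarily in that shared
memory any more, which is the breakage I am referring to.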

