On 2/12/26 11:23 PM, Hui Zhu wrote:
From: Hui Zhu <[email protected]>
Replace hardcoded enum values with bpf_core_enum_value() calls in the
cgroup_iter_memcg test to improve portability across different
kernel versions.

The change adds runtime enum value resolution for:
- node_stat_item: NR_ANON_MAPPED, NR_SHMEM, NR_FILE_PAGES, NR_FILE_MAPPED
- memcg_stat_item: MEMCG_KMEM
- vm_event_item: PGFAULT

This ensures the BPF program can adapt to enum value changes
between kernel versions, returning early if any enum value is
unavailable (resolves to 0).

Signed-off-by: Hui Zhu <[email protected]>
---
.../selftests/bpf/progs/cgroup_iter_memcg.c | 41 +++++++++++++++----
1 file changed, 34 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/cgroup_iter_memcg.c b/tools/testing/selftests/bpf/progs/cgroup_iter_memcg.c
index 59fb70a3cc50..b020951dd7e6 100644
--- a/tools/testing/selftests/bpf/progs/cgroup_iter_memcg.c
+++ b/tools/testing/selftests/bpf/progs/cgroup_iter_memcg.c
@@ -15,6 +15,8 @@ int cgroup_memcg_query(struct bpf_iter__cgroup *ctx)
struct cgroup *cgrp = ctx->cgroup;
struct cgroup_subsys_state *css;
struct mem_cgroup *memcg;
+ int ret = 1;
+ int idx;
if (!cgrp)
return 1;
@@ -26,14 +28,39 @@ int cgroup_memcg_query(struct bpf_iter__cgroup *ctx)
bpf_mem_cgroup_flush_stats(memcg);
- memcg_query.nr_anon_mapped = bpf_mem_cgroup_page_state(memcg, NR_ANON_MAPPED);
- memcg_query.nr_shmem = bpf_mem_cgroup_page_state(memcg, NR_SHMEM);
- memcg_query.nr_file_pages = bpf_mem_cgroup_page_state(memcg, NR_FILE_PAGES);
- memcg_query.nr_file_mapped = bpf_mem_cgroup_page_state(memcg, NR_FILE_MAPPED);
- memcg_query.memcg_kmem = bpf_mem_cgroup_page_state(memcg, MEMCG_KMEM);
- memcg_query.pgfault = bpf_mem_cgroup_vm_events(memcg, PGFAULT);
+ idx = bpf_core_enum_value(enum node_stat_item, NR_ANON_MAPPED);
+ if (idx == 0)
+ goto out;
+ memcg_query.nr_anon_mapped = bpf_mem_cgroup_page_state(memcg, idx);
+ idx = bpf_core_enum_value(enum node_stat_item, NR_SHMEM);
+ if (idx == 0)
+ goto out;
+ memcg_query.nr_shmem = bpf_mem_cgroup_page_state(memcg, idx);
+
+ idx = bpf_core_enum_value(enum node_stat_item, NR_FILE_PAGES);
+ if (idx == 0)
+ goto out;
+ memcg_query.nr_file_pages = bpf_mem_cgroup_page_state(memcg, idx);
+
+ idx = bpf_core_enum_value(enum node_stat_item, NR_FILE_MAPPED);
+ if (idx == 0)
+ goto out;
+ memcg_query.nr_file_mapped = bpf_mem_cgroup_page_state(memcg, idx);
+
+ idx = bpf_core_enum_value(enum memcg_stat_item, MEMCG_KMEM);
+ if (idx == 0)
+ goto out;
+ memcg_query.memcg_kmem = bpf_mem_cgroup_page_state(memcg, idx);
+
+ idx = bpf_core_enum_value(enum vm_event_item, PGFAULT);
+ if (idx == 0)
+ goto out;

This is getting messy. Most of these values should be available on any
system regardless of boot parameters. I was expecting you would make
the call inline.

Let's simplify this whole series. This selftest only needs to exercise
bpf_mem_cgroup_page_state() and bpf_mem_cgroup_vm_events(). I think we
can remove the kmem subtest altogether and still have the coverage we
need. You could then call bpf_core_enum_value() inline as an argument to
the kfuncs above and not need to check whether the enumerators exist
(just let them give you the host-specific CO-RE value).
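
Roughly what I had in mind (untested sketch, using the kfuncs from
earlier in this series and just letting CO-RE resolve the enumerator at
load time):

	/* CO-RE relocates the enum value for the running kernel */
	memcg_query.nr_anon_mapped =
		bpf_mem_cgroup_page_state(memcg,
			bpf_core_enum_value(enum node_stat_item, NR_ANON_MAPPED));
	memcg_query.pgfault =
		bpf_mem_cgroup_vm_events(memcg,
			bpf_core_enum_value(enum vm_event_item, PGFAULT));

No idx temporary, no goto, and the existence checks go away.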