On 1/19/26 16:03, Zhengmian Hu wrote:
When aa_get_buffer() takes a buffer from the per-cpu list it
unconditionally decrements cache->hold. If hold is already 0 while
count is still non-zero, the unsigned decrement wraps to UINT_MAX.
hold then stays non-zero for a very long time, so aa_put_buffer()
never returns buffers to the global list, which can starve other CPUs
and force repeated kmalloc(aa_g_path_max) allocations.

Guard the decrement so hold never underflows.

Signed-off-by: Zhengmian Hu <[email protected]>

thanks,

Acked-by: John Johansen <[email protected]>

I have pulled this into apparmor-next

---
  security/apparmor/lsm.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index 9b6c2f157..a6c884ba6 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -1868,7 +1868,8 @@ char *aa_get_buffer(bool in_atomic)
        if (!list_empty(&cache->head)) {
                aa_buf = list_first_entry(&cache->head, union aa_buffer, list);
                list_del(&aa_buf->list);
-               cache->hold--;
+               if (cache->hold)
+                       cache->hold--;
                cache->count--;
                put_cpu_ptr(&aa_local_buffers);
                return &aa_buf->buffer[0];
