On 2/12/26 7:07 AM, Shakeel Butt wrote:
On Wed, Feb 11, 2026 at 08:51:08PM -0800, JP Kobryn wrote:
It would be useful to see a breakdown of allocations to understand which
NUMA policies are driving them. For example, when investigating memory
pressure, having policy-specific counts could show that allocations were
bound to the affected node (via MPOL_BIND).

Add per-policy page allocation counters as new node stat items. These
counters can help correlate a given mempolicy with pressure on a given
node.

Signed-off-by: JP Kobryn <[email protected]>
Suggested-by: Johannes Weiner <[email protected]>

[...]

  int mempolicy_set_node_perf(unsigned int node, struct access_coordinate *coords)
  {
        struct weighted_interleave_state *new_wi_state, *old_wi_state = NULL;
@@ -2446,8 +2461,14 @@ static struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
        nodemask = policy_nodemask(gfp, pol, ilx, &nid);

-       if (pol->mode == MPOL_PREFERRED_MANY)
-               return alloc_pages_preferred_many(gfp, order, nid, nodemask);
+       if (pol->mode == MPOL_PREFERRED_MANY) {
+               page = alloc_pages_preferred_many(gfp, order, nid, nodemask);
+               if (page)
+                       __mod_node_page_state(page_pgdat(page),
+                                       mpol_node_stat(MPOL_PREFERRED_MANY), 1 << order);
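The mpol_node_stat() helper used above is not part of this excerpt; a rough
sketch of what such a helper might look like, reusing the placeholder
NR_MPOL_*_ALLOC names from the sketch earlier in this mail, is:

	/* Map a mempolicy mode to its per-policy allocation stat item. */
	static inline enum node_stat_item mpol_node_stat(unsigned short mode)
	{
		switch (mode) {
		case MPOL_BIND:
			return NR_MPOL_BIND_ALLOC;
		case MPOL_INTERLEAVE:
			return NR_MPOL_INTERLEAVE_ALLOC;
		case MPOL_PREFERRED:
			return NR_MPOL_PREFERRED_ALLOC;
		case MPOL_PREFERRED_MANY:
			return NR_MPOL_PREFERRED_MANY_ALLOC;
		default:
			return NR_MPOL_DEFAULT_ALLOC;
		}
	}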

Here and in two places below, please use mod_node_page_state() instead of
__mod_node_page_state(). The __foo() variants require preemption to be
disabled, or IRQs to be disabled if the given stat can also be updated from
IRQ context, and this code path does neither.
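I.e. something along the lines of (a sketch only; mod_node_page_state()
takes care of the preempt/IRQ protection internally):

	if (pol->mode == MPOL_PREFERRED_MANY) {
		page = alloc_pages_preferred_many(gfp, order, nid, nodemask);
		if (page)
			mod_node_page_state(page_pgdat(page),
					mpol_node_stat(MPOL_PREFERRED_MANY),
					1 << order);
		/* ... rest of the hunk unchanged ... */
	}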

Thanks, I see syzbot flagged this as well. I can make this change in v2.
