On 2/12/26 22:25, JP Kobryn wrote:
> On 2/12/26 7:24 AM, Vlastimil Babka wrote:
>> On 2/12/26 05:51, JP Kobryn wrote:
>>> It would be useful to see a breakdown of allocations to understand which
>>> NUMA policies are driving them. For example, when investigating memory
>>> pressure, having policy-specific counts could show that allocations were
>>> bound to the affected node (via MPOL_BIND).
>>>
>>> Add per-policy page allocation counters as new node stat items. These
>>> counters can provide correlation between a mempolicy and pressure on a
>>> given node.
>>>
>>> Signed-off-by: JP Kobryn <[email protected]>
>>> Suggested-by: Johannes Weiner <[email protected]>
>> 
>> Are the numa_{hit,miss,etc.} counters insufficient? Could they be extended
>> in a way that would capture any missing important details? A counter per
>> policy type seems exhaustive, but then on one hand it might not be important
>> to distinguish between some of them, and on the other hand it doesn't track
>> the nodemask anyway.
> 
> The two patches of the series should complement each other. When
> investigating memory pressure, we could identify the affected nodes
> (patch 2). Then we can cross-reference the policy-specific stats to find
> any correlation (this patch).
> 
> I think extending the numa_* counters would call for more permutations
> to account for each numa stat per policy. I think distinguishing between
> MPOL_DEFAULT and MPOL_BIND is meaningful, for example. Am I

Are there other useful examples or would it be enough to add e.g. a
numa_bind counter to the numa_hit/miss/etc?
What I'm trying to say is that the level of detail you are trying to add
to the always-on counters seems more suitable for tracepoints. The counters
should be limited to what's known to be useful, not "everything we are
able to track and could possibly need one day".
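For reference, the always-on NUMA counters under discussion are exported
through /proc/vmstat, so a counter added next to numa_hit/miss/etc. would
presumably surface the same way. A quick way to inspect the current set
(assumes a kernel built with CONFIG_NUMA; exact counter names may vary by
kernel version):

```shell
# Dump the existing always-on NUMA allocation counters
# (numa_hit, numa_miss, numa_foreign, ...) from /proc/vmstat.
grep '^numa_' /proc/vmstat
```

Per-node breakdowns of the same counters are also available via
/sys/devices/system/node/node*/numastat.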

> understanding your question?

