On 2026-01-12 03:42, Michal Hocko wrote:
> Hi,
> sorry to jump in this late but the timing of previous versions didn't
> really work well for me.

> On Sun 11-01-26 14:49:57, Mathieu Desnoyers wrote:
> [...]
>> Here is a (possibly incomplete) list of the prior approaches that were
>> used or proposed, along with their downsides:
>>
>> 1) Per-thread rss tracking: large error on many-thread processes.
>>
>> 2) Per-CPU counters: up to 12% slower for short-lived processes and 9%
>>     increased system time in make test workloads [1]. Moreover, the
>>     inaccuracy increases as O(n^2) in the number of CPUs.
>>
>> 3) Per-NUMA-node counters: require atomics on the fast path (overhead),
>>     and the error is high on systems with many NUMA nodes (32 times
>>     the number of NUMA nodes).
>>
>> The approach proposed here is to replace this with hierarchical
>> per-cpu counters, which bound the inaccuracy based on the system
>> topology with O(N*logN).
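
To make the batching/propagation idea concrete, here is a minimal,
single-threaded userspace sketch. The two-level tree, the names and the
batch sizes are simplifications for illustration only (the real counters
are per-cpu, handle concurrency, and derive the tree from the topology);
it is not the API from the series:

#include <stdio.h>
#include <stdlib.h>

/*
 * Illustrative two-level hierarchical counter: per-CPU leaves feed
 * per-node counters, which feed a global root.  Values below the
 * batch size stay at the local level, so the error visible at the
 * root is roughly bounded by
 * (nr_cpus * CPU_BATCH + nr_nodes * NODE_BATCH), independently of
 * how much update traffic the counter sees.
 */
#define NR_NODES        2
#define CPUS_PER_NODE   4
#define NR_CPUS         (NR_NODES * CPUS_PER_NODE)
#define CPU_BATCH       32      /* pages kept in a per-CPU leaf */
#define NODE_BATCH      128     /* pages kept in a per-node counter */

static long cpu_count[NR_CPUS];         /* leaf level (one per CPU) */
static long node_count[NR_NODES];       /* intermediate level */
static long root_count;                 /* approximate global value */

static void node_add(int node, long v)
{
        node_count[node] += v;
        if (labs(node_count[node]) >= NODE_BATCH) {
                /* Push the whole batch up to the root. */
                root_count += node_count[node];
                node_count[node] = 0;
        }
}

/* Fast path: usually touches only the per-CPU leaf. */
static void counter_add(int cpu, long v)
{
        cpu_count[cpu] += v;
        if (labs(cpu_count[cpu]) >= CPU_BATCH) {
                node_add(cpu / CPUS_PER_NODE, cpu_count[cpu]);
                cpu_count[cpu] = 0;
        }
}

/* Approximate read: O(1), error bounded by the per-level batches. */
static long counter_read_approx(void)
{
        return root_count;
}

/* Precise read: O(nr_cpus + nr_nodes), only for slow paths. */
static long counter_read_precise(void)
{
        long sum = root_count;
        int i;

        for (i = 0; i < NR_NODES; i++)
                sum += node_count[i];
        for (i = 0; i < NR_CPUS; i++)
                sum += cpu_count[i];
        return sum;
}

int main(void)
{
        int i;

        for (i = 0; i < 10000; i++)
                counter_add(rand() % NR_CPUS, 1);
        printf("approx=%ld precise=%ld\n",
               counter_read_approx(), counter_read_precise());
        return 0;
}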

> The concept of hierarchical pcp counter is interesting and I am
> definitely not opposed if there are more users that would benefit.

> From the OOM POV, IIUC the primary problem is that get_mm_counter
> (percpu_counter_read_positive) is too imprecise on systems where the task
> is moving around a large number of cpus. In the list of alternative
> solutions I do not see percpu_counter_sum_positive mentioned.
> oom_badness() is a really slow path and taking the slow path to
> calculate a much more precise value seems acceptable. Have you
> considered that option?
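
For concreteness, I read the suggestion as something along these lines
(a rough, untested sketch against the current per-cpu rss_stat counters;
the helper names are mine, not from any posted patch):

/*
 * Hypothetical precise variant of get_mm_counter() for slow paths;
 * it walks every per-cpu slot, so it is O(nr_possible_cpus) per counter.
 */
static inline unsigned long get_mm_counter_sum(struct mm_struct *mm, int member)
{
        return percpu_counter_sum_positive(&mm->rss_stat[member]);
}

static inline unsigned long get_mm_rss_sum(struct mm_struct *mm)
{
        return get_mm_counter_sum(mm, MM_FILEPAGES) +
               get_mm_counter_sum(mm, MM_ANONPAGES) +
               get_mm_counter_sum(mm, MM_SHMEMPAGES);
}

/*
 * oom_badness() would then compute its points from the precise sums:
 *
 *      points = get_mm_rss_sum(p->mm) +
 *               get_mm_counter_sum(p->mm, MM_SWAPENTS) +
 *               mm_pgtables_bytes(p->mm) / PAGE_SIZE;
 */
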
I must admit I assumed that, since there was already a mechanism in
place to avoid summing the per-cpu counters when the oom killer
selects tasks, it was because this

  O(nr_possible_cpus * nr_processes)

operation is too slow for the oom killer's requirements.
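
Concretely, that product comes from the shape of the task selection
pass (simplified from my reading of mm/oom_kill.c): the scan visits
every candidate task, and with precise sums each visit walks every
per-cpu slot of the rss counters, roughly:

        struct task_struct *p;

        rcu_read_lock();
        for_each_process(p) {                   /* O(nr_processes) */
                /*
                 * oom_evaluate_task() -> oom_badness(); with precise
                 * sums, each call walks all per-cpu slots of the rss
                 * counters: O(nr_possible_cpus).
                 */
                if (oom_evaluate_task(p, oc))
                        break;
        }
        rcu_read_unlock();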

AFAIU, the oom killer is executed when the memory allocator fails to
allocate memory, which can happen within code paths that need to make
progress eventually. So even though it's a slow path compared to the
allocator fast path, there must be at least _some_ expectation that it
completes within a decent amount of time. What would that ballpark be?

To give an order of magnitude, I tried modifying the upstream
oom killer to use percpu_counter_sum_positive and compared it with
the hierarchical approach:

AMD EPYC 9654 96-Core (2 sockets)
Within a KVM guest configured with 256 logical CPUs.

                   nr_processes=40    nr_processes=10000
Counter sum:            0.4 ms             81.0 ms
HPCC with 2-pass:       0.3 ms              9.3 ms

So as the number of processes grows on large SMP systems, the latency
of oom killer task selection increases much faster with the counter
sums than with the hierarchical approach.

Thanks,

Mathieu

--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
