Use the precise, albeit slower, RSS counter sums for OOM killer task
selection and /proc statistics. The approximated value is too imprecise
on large many-core systems.

The following RSS tracking issues were noted by Sweet Tea Dorminy [1];
they can lead to picking the wrong task as the OOM kill target:

  Recently, several internal services had an RSS usage regression as part of a
  kernel upgrade. Previously, they were on a pre-6.2 kernel and were able to
  read RSS statistics in a backup watchdog process to monitor and decide if
  they'd overrun their memory budget. Now, however, a representative service
  with five threads, expected to use about a hundred MB of memory, on a 250-cpu
  machine had memory usage tens of megabytes different from the expected amount
  -- this constituted a significant percentage of inaccuracy, causing the
  watchdog to act.

  This was a result of commit f1a7941243c1 ("mm: convert mm's rss stats
  into percpu_counter") [1].  Previously, the memory error was bounded by
  64*nr_threads pages, a very livable megabyte. Now, however, as a result of
  scheduler decisions moving the threads around the CPUs, the memory error could
  be as large as a gigabyte.

  This is a really tremendous inaccuracy for any few-threaded program on a
  large machine and impedes monitoring significantly. These stat counters are
  also used to make OOM killing decisions, so this additional inaccuracy could
  make a big difference in OOM situations -- either resulting in the wrong
  process being killed, or in less memory being returned from an OOM-kill than
  expected.

Here is a (possibly incomplete) list of the prior approaches that were
used or proposed, along with their downsides:

1) Per-thread RSS tracking: large error for processes with many threads.

2) Per-CPU counters: up to 12% slower for short-lived processes and 9%
   increased system time in make test workloads [1]. Moreover, the
   inaccuracy grows quadratically (O(n^2)) with the number of CPUs (see
   the note below).

3) Per-NUMA-node counters: require atomics on the fast path (overhead),
   and the error is large on systems with many NUMA nodes (an error bound
   of 32 times the number of NUMA nodes).
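
Note on 2): the percpu_counter batch size itself scales with the number
of online CPUs, so the worst-case per-counter drift is roughly
batch * nr_cpus pages, hence the quadratic growth.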

The simple fix proposed here is to do the precise per-CPU counter sum
every time a counter value needs to be read. This applies to the OOM
killer task selection, to the /proc statistics, and to the oom
mark_victim trace event.
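
For reference, here is a sketch of where this matters for OOM task
selection: the badness score computed by oom_badness() (mm/oom_kill.c)
is built from these counters, roughly as follows (upstream code, shown
only to illustrate which reads become precise sums):

  /*
   * oom_badness() accumulates the task's memory footprint; with this
   * change, every counter read below is a precise per-CPU sum.
   */
  points = get_mm_rss(p->mm) + get_mm_counter(p->mm, MM_SWAPENTS) +
           mm_pgtables_bytes(p->mm) / PAGE_SIZE;

where get_mm_rss() itself sums the MM_FILEPAGES, MM_ANONPAGES and
MM_SHMEMPAGES counters.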

Note that commit 82241a83cd15 ("mm: fix the inaccurate memory statistics
issue for users") introduced get_mm_counter_sum() to provide precise
memory statistics for _some_ proc files. This change renames
get_mm_counter_sum() to get_mm_counter(), thus moving the rest of the
proc files to the precise sum.

This change trades increased OOM killer latency for a more precise OOM
target task selection: the OOM killer iterates over all tasks and, for
each relevant page type, the precise sum iterates over all possible
CPUs.
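
As a rough back-of-the-envelope check: with 10000 candidate tasks, 4
counters read per task (file, anon, shmem, swap) and 256 possible CPUs,
task selection performs on the order of 10000 * 4 * 256 ~= 10 million
per-CPU reads, which is consistent with the tens of milliseconds measured
below.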

As a reference, here is the execution time of the OOM killer
before/after the change:

AMD EPYC 9654 96-Core (2 sockets)
Within a KVM guest configured with 256 logical CPUs.

                                  |  before  |  after   |
----------------------------------|----------|----------|
nr_processes=40                   |  0.3 ms  |   0.5 ms |
nr_processes=10000                |  3.0 ms  |  80.0 ms |

Suggested-by: Michal Hocko <[email protected]>
Fixes: f1a7941243c1 ("mm: convert mm's rss stats into percpu_counter")
Link: https://lore.kernel.org/lkml/[email protected]/ # [1]
Signed-off-by: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Martin Liu <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: [email protected]
Cc: Shakeel Butt <[email protected]>
Cc: SeongJae Park <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Sweet Tea Dorminy <[email protected]>
Cc: Lorenzo Stoakes <[email protected]>
Cc: "Liam R . Howlett" <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Christian Brauner <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Miaohe Lin <[email protected]>
Cc: Al Viro <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Yu Zhao <[email protected]>
Cc: Roman Gushchin <[email protected]>
Cc: Mateusz Guzik <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Baolin Wang <[email protected]>
Cc: Aboorva Devarajan <[email protected]>
---
 fs/proc/task_mmu.c | 14 +++++++-------
 include/linux/mm.h |  5 -----
 2 files changed, 7 insertions(+), 12 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 81dfc26bfae8..8ca4fbf53fc5 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -39,9 +39,9 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
        unsigned long text, lib, swap, anon, file, shmem;
        unsigned long hiwater_vm, total_vm, hiwater_rss, total_rss;
 
-       anon = get_mm_counter_sum(mm, MM_ANONPAGES);
-       file = get_mm_counter_sum(mm, MM_FILEPAGES);
-       shmem = get_mm_counter_sum(mm, MM_SHMEMPAGES);
+       anon = get_mm_counter(mm, MM_ANONPAGES);
+       file = get_mm_counter(mm, MM_FILEPAGES);
+       shmem = get_mm_counter(mm, MM_SHMEMPAGES);
 
        /*
         * Note: to minimize their overhead, mm maintains hiwater_vm and
@@ -62,7 +62,7 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
        text = min(text, mm->exec_vm << PAGE_SHIFT);
        lib = (mm->exec_vm << PAGE_SHIFT) - text;
 
-       swap = get_mm_counter_sum(mm, MM_SWAPENTS);
+       swap = get_mm_counter(mm, MM_SWAPENTS);
        SEQ_PUT_DEC("VmPeak:\t", hiwater_vm);
        SEQ_PUT_DEC(" kB\nVmSize:\t", total_vm);
        SEQ_PUT_DEC(" kB\nVmLck:\t", mm->locked_vm);
@@ -95,12 +95,12 @@ unsigned long task_statm(struct mm_struct *mm,
                         unsigned long *shared, unsigned long *text,
                         unsigned long *data, unsigned long *resident)
 {
-       *shared = get_mm_counter_sum(mm, MM_FILEPAGES) +
-                       get_mm_counter_sum(mm, MM_SHMEMPAGES);
+       *shared = get_mm_counter(mm, MM_FILEPAGES) +
+                       get_mm_counter(mm, MM_SHMEMPAGES);
        *text = (PAGE_ALIGN(mm->end_code) - (mm->start_code & PAGE_MASK))
                                                                >> PAGE_SHIFT;
        *data = mm->data_vm + mm->stack_vm;
-       *resident = *shared + get_mm_counter_sum(mm, MM_ANONPAGES);
+       *resident = *shared + get_mm_counter(mm, MM_ANONPAGES);
        return mm->total_vm;
 }
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6f959d8ca4b4..d096bb3593ba 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2847,11 +2847,6 @@ static inline bool get_user_page_fast_only(unsigned long addr,
  * per-process(per-mm_struct) statistics.
  */
 static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
-{
-       return percpu_counter_read_positive(&mm->rss_stat[member]);
-}
-
-static inline unsigned long get_mm_counter_sum(struct mm_struct *mm, int member)
 {
        return percpu_counter_sum_positive(&mm->rss_stat[member]);
 }
-- 
2.39.5

