On Tue, May 12, 2026 at 2:28 AM David Hildenbrand (Arm)
<[email protected]> wrote:
>
> On 4/26/26 08:27, Suren Baghdasaryan wrote:
> > Use per-vma locks when reading /proc/pid/smaps and /proc/pid/numa_maps
> > similar to /proc/pid/maps to reduce contention on central mmap_lock. One
> > major difference between maps and smaps/numa_maps reading is that the
> > latter executes a page table walk, which can't be done under RCU due to
> > the possibility of sleeping. Therefore we drop the RCU read lock before
> > this walk while keeping the VMA locked. After the walk we retake the RCU
> > read lock, reset the VMA iterator and proceed with the next VMA.
>
> With many small VMAs, is that overhead noticeable?

It might be, but the point of this patchset (and the previous one that
made a similar change for /proc/pid/maps) is to reduce mmap_lock
contention, not to speed up the read operation, which is not a
performance-critical path. The original problem that Paul McKenney
described, and which kicked off this series of changes, is that a
low-priority monitoring process reading /proc/pid/{maps|smaps|...} can
block high-priority updates by holding the mmap_lock. You can see
details about this problem, and the numbers Paul obtained with the
previous change, here:
https://lore.kernel.org/all/[email protected]/

>
> --
> Cheers,
>
> David
