On Thu, Jun 10, 2021 at 10:35 AM Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com> wrote:
> To avoid a race between the rmap walk and mremap, mremap does take_rmap_locks().
> The lock is taken to ensure that the rmap walk doesn't miss a page table entry
> due to PTE moves via move_page_tables(). The kernel further optimizes this
> locking: if the newly added vma will be found after the old vma during the
> rmap walk, the rmap lock is not taken. This is because the rmap walk finds the
> vmas in the same order, and if we don't find the page table entry attached to
> the older vma, we will find it via the new vma, which is iterated later.
[...]
> Fixes: 2c91bd4a4e2e ("mm: speed up mremap by 20x on large regions")
> Fixes: c49dd3401802 ("mm: speedup mremap on 1GB or larger regions")
probably also "Cc: sta...@vger.kernel.org"?