On Mon, Nov 10, 2025 at 05:17:13PM +0100, Vlastimil Babka wrote:
> On 11/7/25 17:11, Lorenzo Stoakes wrote:
> > Now we have established the VM_MAYBE_GUARD flag and added the capacity to
> > set it atomically, do so upon MADV_GUARD_INSTALL.
> >
> > The places where this flag is currently used, and where it matters, are:
> >
> > * VMA merge - performed under mmap/VMA write lock, therefore excluding
> >   racing writes.
> >
> > * /proc/$pid/smaps - can race the write, however this isn't meaningful as
> >   the flag write is performed at the point of the guard region being
> >   established, and thus an smaps reader can't reasonably expect to avoid
> >   races. Due to atomicity, a reader will observe either the flag being set
> >   or not. Therefore consistency will be maintained.
> >
> > In all other cases the flag being set is irrelevant and atomicity
> > guarantees other flags will be read correctly.
> >
> > Note that non-atomic updates of unrelated flags do not cause an issue with
> > this flag being set atomically, as writes of other flags are performed
> > under mmap/VMA write lock, and these atomic writes are performed under
> > mmap/VMA read lock, which excludes the write, avoiding RMW races.
> >
> > Note that we do not encounter issues with KCSAN by adjusting this flag
> > atomically, as we are only updating a single bit in the flag bitmap and
> > therefore we do not need to annotate these changes.
> >
> > We intentionally set this flag in advance of actually updating the page
> > tables, to ensure that any racing atomic read of this flag will only return
> > false prior to page tables being updated, to allow for serialisation via
> > page table locks.
> >
> > Note that we set vma->anon_vma for anonymous mappings. This is because the
> > expectation for anonymous mappings is that an anon_vma is established
> > should they possess any page table mappings. This is also consistent with
> > what we were doing prior to this patch (unconditionally setting anon_vma on
> > guard region installation).
> >
> > We also need to update retract_page_tables() to ensure that madvise(...,
> > MADV_COLLAPSE) doesn't incorrectly collapse file-backed ranges containing
> > guard regions.
> >
> > This was previously guarded by anon_vma being set to catch MAP_PRIVATE
> > cases, but the introduction of VM_MAYBE_GUARD necessitates that we check
> > this flag instead.
> >
> > We utilise vma_flag_test_atomic() to do so - we first perform an optimistic
> > check, then after the PTE page table lock is held, we can check again
> > safely, as upon guard marker install the flag is set atomically prior to
> > the page table lock being taken to actually apply it.
> >
> > So if the initial optimistic check does not observe VM_MAYBE_GUARD, then
> > either:
> >
> > * Page table retraction acquires page table lock prior to VM_MAYBE_GUARD
> >   being set - guard marker installation will be blocked until page table
> >   retraction is complete.
> >
> > OR:
> >
> > * Guard marker installation acquires page table lock after setting
> >   VM_MAYBE_GUARD, which raced and didn't pick this up in the initial
> >   optimistic check, blocking page table retraction until the guard regions
> >   are installed - the second VM_MAYBE_GUARD check will prevent page table
> >   retraction.
> >
> > Either way we're safe.
> >
> > We refactor the retraction checks into a single helper,
> > file_backed_vma_is_retractable(); there doesn't seem to be any reason
> > for the checks to have been separated as before.
> >
> > Note that VM_MAYBE_GUARD being set atomically remains correct as
> > vma_needs_copy() is invoked with the mmap and VMA write locks held,
> > excluding any race with madvise_guard_install().
> >
> > Signed-off-by: Lorenzo Stoakes <[email protected]>
>
> Reviewed-by: Vlastimil Babka <[email protected]>

Thanks

>
> Small nit below:
>
> > @@ -1778,15 +1801,16 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
> >                     spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
> >
> >             /*
> > -            * Huge page lock is still held, so normally the page table
> > -            * must remain empty; and we have already skipped anon_vma
> > -            * and userfaultfd_wp() vmas.  But since the mmap_lock is not
> > -            * held, it is still possible for a racing userfaultfd_ioctl()
> > -            * to have inserted ptes or markers.  Now that we hold ptlock,
> > -            * repeating the anon_vma check protects from one category,
> > -            * and repeating the userfaultfd_wp() check from another.
> > +            * Huge page lock is still held, so normally the page table must
> > +            * remain empty; and we have already skipped anon_vma and
> > +            * userfaultfd_wp() vmas.  But since the mmap_lock is not held,
> > +            * it is still possible for a racing userfaultfd_ioctl() or
> > +            * madvise() to have inserted ptes or markers.  Now that we hold
> > +            * ptlock, repeating the anon_vma check protects from one
> > +            * category, and repeating the userfaultfd_wp() check from
> > +            * another.
>
> The last part of the comment is unchanged and mentions anon_vma check and
> userfaultfd_wp() check which were there explicitly originally, but now it's
> a file_backed_vma_is_retractable() check that also includes the guard region
> check, so maybe could be updated?

OK will send fix-patch.
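Rough shape I have in mind for the updated comment (exact wording to be settled in the fix-patch):

```c
/*
 * Huge page lock is still held, so normally the page table must remain
 * empty; and we have already skipped anon_vma, userfaultfd_wp() and
 * guard region vmas.  But since the mmap_lock is not held, it is still
 * possible for a racing userfaultfd_ioctl() or madvise() to have
 * inserted ptes or markers.  Now that we hold ptlock, repeating the
 * file_backed_vma_is_retractable() check protects against all of these
 * categories.
 */
```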

>
> >              */
> > -           if (likely(!vma->anon_vma && !userfaultfd_wp(vma))) {
> > +           if (likely(file_backed_vma_is_retractable(vma))) {
> >                     pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
> >                     pmdp_get_lockless_sync();
> >                     success = true;
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index 67bdfcb315b3..de918b107cfc 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -1139,15 +1139,21 @@ static long madvise_guard_install(struct madvise_behavior *madv_behavior)
> >             return -EINVAL;
> >
> >     /*
> > -    * If we install guard markers, then the range is no longer
> > -    * empty from a page table perspective and therefore it's
> > -    * appropriate to have an anon_vma.
> > -    *
> > -    * This ensures that on fork, we copy page tables correctly.
> > +    * Set atomically under read lock. All pertinent readers will need to
> > +    * acquire an mmap/VMA write lock to read it. All remaining readers may
> > +    * or may not see the flag set, but we don't care.
> > +    */
> > +   vma_flag_set_atomic(vma, VM_MAYBE_GUARD_BIT);
> > +
> > +   /*
> > +    * If anonymous and we are establishing page tables the VMA ought to
> > +    * have an anon_vma associated with it.
> >      */
> > -   err = anon_vma_prepare(vma);
> > -   if (err)
> > -           return err;
> > +   if (vma_is_anonymous(vma)) {
> > +           err = anon_vma_prepare(vma);
> > +           if (err)
> > +                   return err;
> > +   }
> >
> >     /*
> >      * Optimistically try to install the guard marker pages first. If any
>
