On Thu, Sep 19, 2019 at 03:41:43PM +0000, Catalin Marinas wrote:
> On Thu, Sep 19, 2019 at 06:00:07PM +0300, Kirill A. Shutemov wrote:
> > On Wed, Sep 18, 2019 at 07:00:30PM +0100, Catalin Marinas wrote:
> > > On Wed, Sep 18, 2019 at 05:00:27PM +0300, Kirill A. Shutemov wrote:
> > > > On Wed, Sep 18, 2019 at 09:19:14PM +0800, Jia He wrote:
> > > > > @@ -2152,20 +2163,34 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> > > > >        */
> > > > >       if (unlikely(!src)) {
> > > > >               void *kaddr = kmap_atomic(dst);
> > > > > -             void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > > > > +             void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> > > > > +             pte_t entry;
> > > > >  
> > > > >               /*
> > > > >                * This really shouldn't fail, because the page is there
> > > > >                * in the page tables. But it might just be unreadable,
> > > > >                * in which case we just give up and fill the result with
> > > > > -              * zeroes.
> > > > > +              * zeroes. On architectures with software "accessed" bits,
> > > > > +              * we would take a double page fault here, so mark it
> > > > > +              * accessed here.
> > > > >                */
> > > > > +             if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > > > > +                     spin_lock(vmf->ptl);
> > > > > +                     if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > > > > +                             entry = pte_mkyoung(vmf->orig_pte);
> > > > > +                             if (ptep_set_access_flags(vma, addr,
> > > > > +                                                       vmf->pte, entry, 0))
> > > > > +                                     update_mmu_cache(vma, addr, vmf->pte);
> > > > > +                     }
> > > > 
> > > > I don't follow.
> > > > 
> > > > So if the pte has changed under you, you don't set the accessed bit,
> > > > but nevertheless copy from the user.
> > > > 
> > > > What makes you think it will not trigger the same problem?
> > > > 
> > > > I think we need to make cow_user_page() fail in this case and the
> > > > caller -- wp_page_copy() -- return zero. If the fault was resolved by
> > > > another thread, we are fine. If not, userspace will re-fault on the
> > > > same address and we will handle the fault on the second attempt.
> > > 
> > > It would be nice to clarify the semantics of this function and do as
> > > you suggest, but the current comment is slightly confusing:
> > > 
> > >   /*
> > >    * If the source page was a PFN mapping, we don't have
> > >    * a "struct page" for it. We do a best-effort copy by
> > >    * just copying from the original user address. If that
> > >    * fails, we just zero-fill it. Live with it.
> > >    */
> > > 
> > > Would any user-space rely on getting a zero-filled page here instead of
> > > a recursive fault?
> > 
> > I don't see the point of a zero-filled page in this case. SIGBUS sounds
> > like a more appropriate response, no?
> 
> I think I misunderstood your comment. So, if !pte_same(), we should let
> userspace re-fault. This wouldn't be a user ABI change, and it is
> bounded; we can't end up in an infinite re-fault loop.

Right. !pte_same() can only happen if we raced with somebody else.
I cannot imagine a situation in which the race would happen more than a
few times in a row.
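
Something along these lines, maybe (a completely untested sketch, just to
show what I mean; the bool return, deriving vma/addr from vmf and the
cleanup in wp_page_copy() are my guesses, not a tested patch):

static inline bool cow_user_page(struct page *dst, struct page *src,
				 struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long addr = vmf->address;
	...
	if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
		pte_t entry;

		spin_lock(vmf->ptl);
		if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
			/*
			 * Somebody else resolved the fault under us;
			 * report failure so the caller can back out.
			 */
			spin_unlock(vmf->ptl);
			return false;
		}
		entry = pte_mkyoung(vmf->orig_pte);
		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
			update_mmu_cache(vma, addr, vmf->pte);
		spin_unlock(vmf->ptl);
	}
	...
	return true;
}

and in wp_page_copy():

	if (!cow_user_page(new_page, old_page, vmf)) {
		/*
		 * COW failed because the PTE changed under us; if the
		 * fault was resolved by another thread we are done,
		 * otherwise userspace will simply re-fault.
		 */
		put_page(new_page);
		if (old_page)
			put_page(old_page);
		return 0;
	}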

> In case of a __copy_from_user_inatomic() error, SIGBUS would make more
> sense, but it changes the current behaviour (zero-filling the page).
> This can be left for a separate patch; it doesn't affect the arm64 case
> here.

I think it's safer to leave it as is. Maybe add a WARN_ON_ONCE() or
something. There could be some obscure use case that I don't see.
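
Something like this in the copy-failure path, say (sketch only, reusing
kaddr/uaddr from the existing code):

	if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
		/*
		 * Copy failed even though the PTE says the page is there.
		 * Keep the historical zero-fill, but warn so we can see
		 * whether anything out there actually hits this.
		 */
		WARN_ON_ONCE(1);
		clear_page(kaddr);
	}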

-- 
 Kirill A. Shutemov
