Hi Catalin

> -----Original Message-----
> From: Catalin Marinas <[email protected]>
> Sent: 20 September 2019 0:42
> To: Justin He (Arm Technology China) <[email protected]>
> Cc: Will Deacon <[email protected]>; Mark Rutland
> <[email protected]>; James Morse <[email protected]>; Marc
> Zyngier <[email protected]>; Matthew Wilcox <[email protected]>; Kirill A.
> Shutemov <[email protected]>; linux-arm-
> [email protected]; [email protected]; linux-
> [email protected]; Suzuki Poulose <[email protected]>; Punit
> Agrawal <[email protected]>; Anshuman Khandual
> <[email protected]>; Alex Van Brunt
> <[email protected]>; Robin Murphy <[email protected]>;
> Thomas Gleixner <[email protected]>; Andrew Morton
> <[email protected]>; Jérôme Glisse <[email protected]>; Ralph Campbell
> <[email protected]>; [email protected]; Kaly Xin (Arm Technology
> China) <[email protected]>
> Subject: Re: [PATCH v5 3/3] mm: fix double page fault on arm64 if PTE_AF
> is cleared
>
> On Fri, Sep 20, 2019 at 12:12:04AM +0800, Jia He wrote:
> > @@ -2152,7 +2163,29 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> >  	 */
> >  	if (unlikely(!src)) {
> >  		void *kaddr = kmap_atomic(dst);
> > -		void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > +		void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> > +		pte_t entry;
> > +
> > +		/* On architectures with software "accessed" bits, we would
> > +		 * take a double page fault, so mark it accessed here.
> > +		 */
> > +		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > +			spin_lock(vmf->ptl);
> > +			if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > +				entry = pte_mkyoung(vmf->orig_pte);
> > +				if (ptep_set_access_flags(vma, addr,
> > +							  vmf->pte, entry, 0))
> > +					update_mmu_cache(vma, addr, vmf->pte);
> > +			} else {
> > +				/* Other thread has already handled the fault
> > +				 * and we don't need to do anything. If it's
> > +				 * not the case, the fault will be triggered
> > +				 * again on the same address.
> > +				 */
> > +				return -1;
> > +			}
> > +			spin_unlock(vmf->ptl);
>
> Returning with the spinlock held doesn't normally go very well ;).

Yes, my bad. Will fix ASAP.
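For the record, what I have in mind is simply dropping vmf->ptl on the early-return path, roughly as below. This is an untested sketch against this hunk only; the only change relative to the quoted code is the extra spin_unlock() (and the closing brace) before return -1, which is my guess at the shape of the fix, not necessarily what v6 will look like:

	if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
		spin_lock(vmf->ptl);
		if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
			entry = pte_mkyoung(vmf->orig_pte);
			if (ptep_set_access_flags(vma, addr,
						  vmf->pte, entry, 0))
				update_mmu_cache(vma, addr, vmf->pte);
		} else {
			/* Other thread has already handled the fault
			 * and we don't need to do anything. If it's
			 * not the case, the fault will be triggered
			 * again on the same address.
			 */
			spin_unlock(vmf->ptl);	/* drop ptl before bailing out */
			return -1;
		}
		spin_unlock(vmf->ptl);
	}

That keeps the lock lifetime symmetric between the "pte changed under us" path and the normal path.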
--
Cheers,
Justin (Jia He)

