Re: [PATCH v3 2/2] mm, hwpoison: When copy-on-write hits poison, take page offline

2022-10-28 Thread Miaohe Lin
On 2022/10/29 0:13, Luck, Tony wrote:
>>> Cannot call memory_failure() directly from the fault handler because
>>> mmap_lock (and others) are held.
>>
>> Could you please explain which lock makes it unfeasible to call
>> memory_failure() directly and why? I'm somewhat confused. But I agree

Re: [PATCH v3 2/2] mm, hwpoison: When copy-on-write hits poison, take page offline

2022-10-27 Thread Miaohe Lin
_user_highpage(dst, src, addr, vma))
> + if (copy_mc_user_highpage(dst, src, addr, vma)) {
> + memory_failure_queue(page_to_pfn(src), 0);

It seems MF_ACTION_REQUIRED is not needed for memory_failure_queue() here.
Thanks for your patch.

Reviewed-by: Miaohe Lin

Thanks,
Miaohe Lin

Re: [PATCH v3 1/2] mm, hwpoison: Try to recover from copy-on write faults

2022-10-27 Thread Miaohe Lin
vto = kmap_local_page(to);
> + ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);

In copy_user_highpage(), kmsan_unpoison_memory(page_address(to), PAGE_SIZE)
is done after the copy when __HAVE_ARCH_COPY_USER_HIGHPAGE isn't defined.
Do we need to do something similar here? But I'm not familiar with kmsan,
so I can easily be wrong. Anyway, this patch looks good to me. Thanks.

Reviewed-by: Miaohe Lin

Thanks,
Miaohe Lin
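[The calling convention discussed above — copy_mc_to_kernel() returns the number of bytes it could not copy, with 0 meaning full success — is what lets the COW fault path detect poison. A minimal userspace sketch of that pattern, with a hypothetical sim_copy_mc() standing in for the machine-check-aware copy (the poison_at parameter is purely an illustration device, not a kernel API):]

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Hypothetical stand-in for copy_mc_to_kernel(): copies up to len bytes
 * and returns the number of bytes NOT copied (0 on full success), which
 * is the return convention the kernel helper uses. A machine check is
 * simulated by stopping at a caller-chosen "poisoned" offset. */
static size_t sim_copy_mc(void *dst, const void *src, size_t len,
                          size_t poison_at /* SIZE_MAX = no poison */)
{
    size_t n = len < poison_at ? len : poison_at;
    memcpy(dst, src, n);
    return len - n;
}

/* Mirrors the copy_mc_user_highpage() pattern: a nonzero return from the
 * machine-check-aware copy means the source page is poisoned, so the
 * caller reports failure instead of consuming bad data. */
static int copy_page_recoverable(void *dst, const void *src, size_t poison_at)
{
    if (sim_copy_mc(dst, src, PAGE_SIZE, poison_at))
        return -1;  /* real code would then queue the pfn for recovery */
    return 0;
}
```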

Re: [PATCH v2] mm, hwpoison: Try to recover from copy-on write faults

2022-10-20 Thread Miaohe Lin
-- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2848,6 +2848,37 @@ static inline int pte_unmap_same(struct vm_fault *vmf)
> return same;
> }
>
> +#ifdef CONFIG_MEMORY_FAILURE
> +struct pfn_work {
> + struct work_struct work;
> + unsigned long pfn;
> +};
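[The v2 patch quoted above defers the poison handling to process context by embedding a work_struct in a pfn_work. The key idiom is that the work handler recovers the enclosing pfn_work from the embedded member via container_of(). A minimal userspace sketch, with simplified stand-ins for the kernel's work_struct and workqueue machinery (only the embed-and-recover pattern is shown, not real queueing):]

```c
#include <stddef.h>

/* Simplified stand-in for the kernel type: the real code embeds a
 * struct work_struct and queues it on a workqueue; here the struct
 * only carries the handler pointer. */
struct work_struct {
    void (*func)(struct work_struct *);
};

struct pfn_work {
    struct work_struct work;
    unsigned long pfn;
};

/* Same definition the kernel uses: recover the enclosing structure
 * from a pointer to one of its members. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

static unsigned long handled_pfn;

/* Work handler: what the kernel would run in process context, where
 * it is safe to call memory_failure(); here we just record the pfn. */
static void do_memory_failure_work(struct work_struct *w)
{
    struct pfn_work *pw = container_of(w, struct pfn_work, work);
    handled_pfn = pw->pfn;  /* real code: memory_failure(pw->pfn, 0); */
}
```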

Re: [PATCH v11 04/13] mm/ioremap: rename ioremap_*_range to vmap_*_range

2021-01-27 Thread Miaohe Lin
Hi:

On 2021/1/26 12:45, Nicholas Piggin wrote:
> This will be used as a generic kernel virtual mapping function, so
> re-name it in preparation.

Looks good to me. Thanks.

Reviewed-by: Miaohe Lin

> Signed-off-by: Nicholas Piggin
> ---

Re: [PATCH v11 02/13] mm: apply_to_pte_range warn and fail if a large pte is encountered

2021-01-26 Thread Miaohe Lin
&& WARN_ON_ONCE(pgd_bad(*pgd))) {
> + if (!create)
> + continue;
> + pgd_clear_bad(pgd);
> + }
> + err = apply_to_p4d_range(mm, pgd, addr, next,
> + fn, data, create, &mask);
> if (err)
> break;
> } while (pgd++, addr = next, addr != end);

Looks good to me, thanks.

Reviewed-by: Miaohe Lin

Re: [PATCH v11 01/13] mm/vmalloc: fix HUGE_VMAP regression by enabling huge pages in vmalloc_to_page

2021-01-26 Thread Miaohe Lin
e(*pmd) || pmd_bad(*pmd))
> + if (pmd_none(*pmd))
> + return NULL;
> + if (pmd_leaf(*pmd))
> + return pmd_page(*pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
> + if (WARN_ON_ONCE(pmd_bad(*pmd)))
> return NULL;
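[The pmd_leaf() branch in the hunk above computes which small page within the huge mapping the address falls in: pmd_page() gives the first page of the leaf, and ((addr & ~PMD_MASK) >> PAGE_SHIFT) is the page index within it. A standalone sketch of just that arithmetic, using assumed x86-64 constants (PAGE_SHIFT=12, PMD_SHIFT=21 for 2 MiB leaves; real kernels derive these per-arch):]

```c
/* Assumed x86-64 values; illustration only. */
#define PAGE_SHIFT 12
#define PMD_SHIFT  21
#define PMD_SIZE   (1UL << PMD_SHIFT)
#define PMD_MASK   (~(PMD_SIZE - 1))

/* Index of the 4 KiB page within the 2 MiB PMD leaf that addr falls in,
 * i.e. the "+ ((addr & ~PMD_MASK) >> PAGE_SHIFT)" offset the patch adds
 * to pmd_page(*pmd) in vmalloc_to_page(). */
static unsigned long pmd_leaf_page_index(unsigned long addr)
{
    return (addr & ~PMD_MASK) >> PAGE_SHIFT;
}
```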