On Thu, Dec 13, 2018 at 8:15 AM Peter Xu <pet...@redhat.com> wrote:
>
> When splitting a huge migrating PMD, we'll transfer all the existing
> PMD bits and apply them again onto the small PTEs. However we are
> fetching the bits unconditionally via pmd_soft_dirty(), pmd_write()
> or pmd_young() while actually they don't make sense at all when it's
> a migration entry. Fix them up. While at it, drop the ifdef altogether
> as it is not needed.
>
> Note that if my understanding of the problem is correct, then without
> the patch there is a chance to lose some of the dirty bits in the
> migrating pmd pages (on x86_64 we're fetching bit 11, which is part of
> the swap offset, instead of bit 2), and it could potentially corrupt
> the memory of a userspace program which depends on the dirty bit.
>
Looks good to me.

Reviewed-by: Konstantin Khlebnikov <khlebni...@yandex-team.ru>

> CC: Andrea Arcangeli <aarca...@redhat.com>
> CC: Andrew Morton <a...@linux-foundation.org>
> CC: "Kirill A. Shutemov" <kirill.shute...@linux.intel.com>
> CC: Matthew Wilcox <wi...@infradead.org>
> CC: Michal Hocko <mho...@suse.com>
> CC: Dave Jiang <dave.ji...@intel.com>
> CC: "Aneesh Kumar K.V" <aneesh.ku...@linux.vnet.ibm.com>
> CC: Souptick Joarder <jrdr.li...@gmail.com>
> CC: Konstantin Khlebnikov <khlebni...@yandex-team.ru>
> CC: Zi Yan <zi....@cs.rutgers.edu>
> CC: linux...@kvack.org
> CC: linux-kernel@vger.kernel.org
> Signed-off-by: Peter Xu <pet...@redhat.com>
> ---
> v2:
> - fix it up for young/write/dirty bits too [Konstantin]
> v3:
> - fetch write correctly for migration entry; drop macro [Konstantin]
> ---
>  mm/huge_memory.c | 20 +++++++++++---------
>  1 file changed, 11 insertions(+), 9 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f2d19e4fe854..aebade83cec9 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2145,23 +2145,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>          */
>         old_pmd = pmdp_invalidate(vma, haddr, pmd);
>
> -#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>         pmd_migration = is_pmd_migration_entry(old_pmd);
> -       if (pmd_migration) {
> +       if (unlikely(pmd_migration)) {
>                 swp_entry_t entry;
>
>                 entry = pmd_to_swp_entry(old_pmd);
>                 page = pfn_to_page(swp_offset(entry));
> -       } else
> -#endif
> +               write = is_write_migration_entry(entry);
> +               young = false;
> +               soft_dirty = pmd_swp_soft_dirty(old_pmd);
> +       } else {
>                 page = pmd_page(old_pmd);
> +               if (pmd_dirty(old_pmd))
> +                       SetPageDirty(page);
> +               write = pmd_write(old_pmd);
> +               young = pmd_young(old_pmd);
> +               soft_dirty = pmd_soft_dirty(old_pmd);
> +       }
>         VM_BUG_ON_PAGE(!page_count(page), page);
>         page_ref_add(page, HPAGE_PMD_NR - 1);
> -       if (pmd_dirty(old_pmd))
> -               SetPageDirty(page);
> -       write = pmd_write(old_pmd);
> -       young = pmd_young(old_pmd);
> -       soft_dirty = pmd_soft_dirty(old_pmd);
>
>         /*
>          * Withdraw the table only after we mark the pmd entry invalid.
> --
> 2.17.1
>
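For anyone skimming the thread, a toy userspace model of the failure mode
described in the changelog may help. The bit positions and helper names
below are made up for illustration only and are not the real x86_64
pmd/swap-entry layout; the point is just that the present-format and
swap-format accessors decode the same 64-bit word differently, so calling
a present-format helper on a migration entry reads a bit that actually
belongs to the swap offset:

    /* Toy model only -- invented bit layout, not the kernel's. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define FAKE_SOFT_DIRTY       (1ULL << 11)  /* "present" soft-dirty bit */
    #define FAKE_SWP_SOFT_DIRTY   (1ULL << 2)   /* "swap" soft-dirty bit    */
    #define FAKE_SWP_OFFSET_SHIFT 9             /* swap offset starts here  */

    static bool fake_pmd_soft_dirty(uint64_t pmd)     { return pmd & FAKE_SOFT_DIRTY; }
    static bool fake_pmd_swp_soft_dirty(uint64_t pmd) { return pmd & FAKE_SWP_SOFT_DIRTY; }

    /* A clean migration entry whose swap offset happens to have bit 2 set. */
    static uint64_t fake_migration_entry(uint64_t offset)
    {
            return offset << FAKE_SWP_OFFSET_SHIFT;
    }

    int main(void)
    {
            uint64_t pmd = fake_migration_entry(0x4); /* offset bit lands at word bit 11 */

            /* Wrong helper: an offset bit is mistaken for soft-dirty. */
            printf("present-format helper -> %d (bogus)\n", fake_pmd_soft_dirty(pmd));
            /* Right helper for a migration entry: it really is clean. */
            printf("swap-format helper    -> %d\n", fake_pmd_swp_soft_dirty(pmd));
            return 0;
    }

With this made-up layout the wrong helper reports soft-dirty = 1 for an
entry that is actually clean, which is the same class of mistake the patch
fixes for write/young/soft-dirty on migration entries.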