[ adding -stable ]

The patch below is upstream as commit fc5f9d5f151c ("x86/mm: Fix boot crash caused by incorrect loop count calculation in sync_global_pgds()"). The referenced bug potentially affects all KASLR-enabled kernels with more than 512 GB of memory. Please apply this patch to all current -stable kernels.
On Fri, May 5, 2017 at 1:11 AM, tip-bot for Baoquan He <[email protected]> wrote:
> Commit-ID:  fc5f9d5f151c9fff21d3d1d2907b888a5aec3ff7
> Gitweb:     http://git.kernel.org/tip/fc5f9d5f151c9fff21d3d1d2907b888a5aec3ff7
> Author:     Baoquan He <[email protected]>
> AuthorDate: Thu, 4 May 2017 10:25:47 +0800
> Committer:  Ingo Molnar <[email protected]>
> CommitDate: Fri, 5 May 2017 08:21:24 +0200
>
> x86/mm: Fix boot crash caused by incorrect loop count calculation in sync_global_pgds()
>
> Jeff Moyer reported that on his system with two memory regions 0~64G and
> 1T~1T+192G, and the kernel option "memmap=192G!1024G" added, enabling KASLR
> makes the system hang intermittently during boot, while booting with
> 'nokaslr' does not.
>
> The back trace is:
>
>  Oops: 0000 [#1] SMP
>
>  RIP: memcpy_erms()
>  [ .... ]
>  Call Trace:
>   pmem_rw_page()
>   bdev_read_page()
>   do_mpage_readpage()
>   mpage_readpages()
>   blkdev_readpages()
>   __do_page_cache_readahead()
>   force_page_cache_readahead()
>   page_cache_sync_readahead()
>   generic_file_read_iter()
>   blkdev_read_iter()
>   __vfs_read()
>   vfs_read()
>   SyS_read()
>   entry_SYSCALL_64_fastpath()
>
> This crash happens because the for loop count calculation in sync_global_pgds()
> is not correct. When a mapping area crosses PGD entries, we should calculate
> the starting address of the region which the next PGD covers and use it as the
> next loop iteration's address, not just add PGDIR_SIZE directly. The old code
> works correctly only if the mapping area is an exact multiple of PGDIR_SIZE;
> otherwise the end region can be skipped, so it is never synchronized from the
> kernel PGD init_mm.pgd to all other processes.
>
> In Jeff's system, the emulated pmem area [1024G, 1216G) is smaller than
> PGDIR_SIZE. 'nokaslr' works because PAGE_OFFSET is 1T-aligned, which makes
> this area map inside one PGD entry. With KASLR enabled, this area can cross
> two PGD entries, and then the next PGD entry is not synced to all other
> processes. That is why we saw an empty PGD.
>
> Fix it.
>
> Reported-by: Jeff Moyer <[email protected]>
> Signed-off-by: Baoquan He <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Andy Lutomirski <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: Brian Gerst <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: Dave Hansen <[email protected]>
> Cc: Dave Young <[email protected]>
> Cc: Denys Vlasenko <[email protected]>
> Cc: H. Peter Anvin <[email protected]>
> Cc: Jinbum Park <[email protected]>
> Cc: Josh Poimboeuf <[email protected]>
> Cc: Kees Cook <[email protected]>
> Cc: Kirill A. Shutemov <[email protected]>
> Cc: Linus Torvalds <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Thomas Garnier <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Yasuaki Ishimatsu <[email protected]>
> Cc: Yinghai Lu <[email protected]>
> Link: http://lkml.kernel.org/r/[email protected]
> Signed-off-by: Ingo Molnar <[email protected]>
> ---
>  arch/x86/mm/init_64.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 745e5e1..97fe887 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -94,10 +94,10 @@ __setup("noexec32=", nonx32_setup);
>   */
>  void sync_global_pgds(unsigned long start, unsigned long end)
>  {
> -	unsigned long address;
> +	unsigned long addr;
>
> -	for (address = start; address <= end; address += PGDIR_SIZE) {
> -		pgd_t *pgd_ref = pgd_offset_k(address);
> +	for (addr = start; addr <= end; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
> +		pgd_t *pgd_ref = pgd_offset_k(addr);
>  		const p4d_t *p4d_ref;
>  		struct page *page;
>
> @@ -106,7 +106,7 @@ void sync_global_pgds(unsigned long start, unsigned long end)
>  		 * handle synchonization on p4d level.
>  		 */
>  		BUILD_BUG_ON(pgd_none(*pgd_ref));
> -		p4d_ref = p4d_offset(pgd_ref, address);
> +		p4d_ref = p4d_offset(pgd_ref, addr);
>
>  		if (p4d_none(*p4d_ref))
>  			continue;
>
> @@ -117,8 +117,8 @@ void sync_global_pgds(unsigned long start, unsigned long end)
>  			p4d_t *p4d;
>  			spinlock_t *pgt_lock;
>
> -			pgd = (pgd_t *)page_address(page) + pgd_index(address);
> -			p4d = p4d_offset(pgd, address);
> +			pgd = (pgd_t *)page_address(page) + pgd_index(addr);
> +			p4d = p4d_offset(pgd, addr);
>  			/* the pgt_lock only for Xen */
>  			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
>  			spin_lock(pgt_lock);

