Re: [RFC, PATCHv1 24/28] x86/mm: add sync_global_pgds() for configuration with 5-level paging
On Thu, Dec 08, 2016 at 10:42:19AM -0800, Andy Lutomirski wrote:
> On Thu, Dec 8, 2016 at 8:21 AM, Kirill A. Shutemov wrote:
> > This basically restores a slightly modified version of the original
> > sync_global_pgds() which we had before the folded p4d was introduced.
> >
> > The only modification is protection against 'address' overflow.
> >
> > Signed-off-by: Kirill A. Shutemov
> > ---
> >  arch/x86/mm/init_64.c | 47 +++
> >  1 file changed, 47 insertions(+)
> >
> > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > index a991f5c4c2c4..d637893ac8c2 100644
> > --- a/arch/x86/mm/init_64.c
> > +++ b/arch/x86/mm/init_64.c
> > @@ -92,6 +92,52 @@ __setup("noexec32=", nonx32_setup);
> >   * When memory was added/removed make sure all the processes MM have
> >   * suitable PGD entries in the local PGD level page.
> >   */
> > +#ifdef CONFIG_X86_5LEVEL
> > +void sync_global_pgds(unsigned long start, unsigned long end, int removed)
> > +{
> > +	unsigned long address;
> > +
> > +	for (address = start; address <= end && address >= start;
> > +			address += PGDIR_SIZE) {
> > +		const pgd_t *pgd_ref = pgd_offset_k(address);
> > +		struct page *page;
> > +
> > +		/*
> > +		 * When it is called after memory hot remove, pgd_none()
> > +		 * returns true. In this case (removed == 1), we must clear
> > +		 * the PGD entries in the local PGD level page.
> > +		 */
> > +		if (pgd_none(*pgd_ref) && !removed)
> > +			continue;
>
> This isn't quite specific to your patch, but can we assert that, if
> removed=1, then we're not operating on the vmalloc range? Because if
> we do, this will be racy in nasty ways.

Looks like there are no users of removed=1. The last user is gone with
af2cf278ef4f ("x86/mm/hotplug: Don't remove PGD entries in
remove_pagetable()").

I'll just drop it (with a separate patch).

-- 
 Kirill A. Shutemov
Re: [RFC, PATCHv1 24/28] x86/mm: add sync_global_pgds() for configuration with 5-level paging
On Thu, Dec 8, 2016 at 8:21 AM, Kirill A. Shutemov wrote:
> This basically restores a slightly modified version of the original
> sync_global_pgds() which we had before the folded p4d was introduced.
>
> The only modification is protection against 'address' overflow.
>
> Signed-off-by: Kirill A. Shutemov
> ---
>  arch/x86/mm/init_64.c | 47 +++
>  1 file changed, 47 insertions(+)
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index a991f5c4c2c4..d637893ac8c2 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -92,6 +92,52 @@ __setup("noexec32=", nonx32_setup);
>   * When memory was added/removed make sure all the processes MM have
>   * suitable PGD entries in the local PGD level page.
>   */
> +#ifdef CONFIG_X86_5LEVEL
> +void sync_global_pgds(unsigned long start, unsigned long end, int removed)
> +{
> +	unsigned long address;
> +
> +	for (address = start; address <= end && address >= start;
> +			address += PGDIR_SIZE) {
> +		const pgd_t *pgd_ref = pgd_offset_k(address);
> +		struct page *page;
> +
> +		/*
> +		 * When it is called after memory hot remove, pgd_none()
> +		 * returns true. In this case (removed == 1), we must clear
> +		 * the PGD entries in the local PGD level page.
> +		 */
> +		if (pgd_none(*pgd_ref) && !removed)
> +			continue;

This isn't quite specific to your patch, but can we assert that, if
removed=1, then we're not operating on the vmalloc range? Because if
we do, this will be racy in nasty ways.
[RFC, PATCHv1 24/28] x86/mm: add sync_global_pgds() for configuration with 5-level paging
This basically restores a slightly modified version of the original
sync_global_pgds() which we had before the folded p4d was introduced.

The only modification is protection against 'address' overflow.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/mm/init_64.c | 47 +++
 1 file changed, 47 insertions(+)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index a991f5c4c2c4..d637893ac8c2 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -92,6 +92,52 @@ __setup("noexec32=", nonx32_setup);
  * When memory was added/removed make sure all the processes MM have
  * suitable PGD entries in the local PGD level page.
  */
+#ifdef CONFIG_X86_5LEVEL
+void sync_global_pgds(unsigned long start, unsigned long end, int removed)
+{
+	unsigned long address;
+
+	for (address = start; address <= end && address >= start;
+			address += PGDIR_SIZE) {
+		const pgd_t *pgd_ref = pgd_offset_k(address);
+		struct page *page;
+
+		/*
+		 * When it is called after memory hot remove, pgd_none()
+		 * returns true. In this case (removed == 1), we must clear
+		 * the PGD entries in the local PGD level page.
+		 */
+		if (pgd_none(*pgd_ref) && !removed)
+			continue;
+
+		spin_lock(&pgd_lock);
+		list_for_each_entry(page, &pgd_list, lru) {
+			pgd_t *pgd;
+			spinlock_t *pgt_lock;
+
+			pgd = (pgd_t *)page_address(page) + pgd_index(address);
+			/* the pgt_lock only for Xen */
+			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
+			spin_lock(pgt_lock);
+
+			if (!pgd_none(*pgd_ref) && !pgd_none(*pgd))
+				BUG_ON(pgd_page_vaddr(*pgd)
+				       != pgd_page_vaddr(*pgd_ref));
+
+			if (removed) {
+				if (pgd_none(*pgd_ref) && !pgd_none(*pgd))
+					pgd_clear(pgd);
+			} else {
+				if (pgd_none(*pgd))
+					set_pgd(pgd, *pgd_ref);
+			}
+
+			spin_unlock(pgt_lock);
+		}
+		spin_unlock(&pgd_lock);
+	}
+}
+#else
 void sync_global_pgds(unsigned long start, unsigned long end, int removed)
 {
 	unsigned long address;
@@ -145,6 +191,7 @@ void sync_global_pgds(unsigned long start, unsigned long end, int removed)
 	spin_unlock(&pgd_lock);
 }
 }
+#endif

 /*
  * NOTE: This function is marked __ref because it calls __init function
-- 
2.10.2