Currently lock_memory_hotplug()/unlock_memory_hotplug() are used to protect totalram_pages and zone->managed_pages. But besides the memory hotplug driver, totalram_pages and zone->managed_pages may also be modified at runtime by other drivers, such as Xen balloon and virtio_balloon. For those cases the memory hotplug lock is too heavy, so introduce a dedicated lock to protect totalram_pages and zone->managed_pages.
Now the locking rules for totalram_pages and zone->managed_pages simplify to:

1) no locking for read accesses, because they are of type unsigned long;
2) no locking for write accesses at boot time, which runs in single-threaded context;
3) write accesses at runtime are serialized by acquiring the dedicated managed_page_count_lock.

Also adjust zone->managed_pages when freeing reserved pages into the buddy system, to keep totalram_pages and zone->managed_pages consistent. A brief usage sketch for balloon-style drivers follows the patch below.

Signed-off-by: Jiang Liu <jiang....@huawei.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Mel Gorman <mgor...@suse.de>
Cc: Michel Lespinasse <wal...@google.com>
Cc: Rik van Riel <r...@redhat.com>
Cc: Minchan Kim <minc...@kernel.org>
Cc: linux...@kvack.org (open list:MEMORY MANAGEMENT)
Cc: linux-kernel@vger.kernel.org (open list)
---
 include/linux/mm.h     |  6 ++----
 include/linux/mmzone.h | 14 ++++++++++----
 mm/page_alloc.c        | 19 +++++++++++++++++++
 3 files changed, 31 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1f03b0e..da3ffb0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1301,6 +1301,7 @@ extern void free_initmem(void);
  */
 extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
					int poison, char *s);
+
 #ifdef CONFIG_HIGHMEM
 /*
  * Free a highmem page into the buddy system, adjusting totalhigh_pages
@@ -1309,10 +1310,7 @@ extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
 extern void free_highmem_page(struct page *page);
 #endif
 
-static inline void adjust_managed_page_count(struct page *page, long count)
-{
-	totalram_pages += count;
-}
+extern void adjust_managed_page_count(struct page *page, long count);
 
 /* Free the reserved page into the buddy system, so it gets managed. */
 static inline void __free_reserved_page(struct page *page)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 72e1cb5..dc9c6ca 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -474,10 +474,16 @@ struct zone {
	 * frequently read in proximity to zone->lock. It's good to
	 * give them a chance of being in the same cacheline.
	 *
-	 * Write access to present_pages and managed_pages at runtime should
-	 * be protected by lock_memory_hotplug()/unlock_memory_hotplug().
-	 * Any reader who can't tolerant drift of present_pages and
-	 * managed_pages should hold memory hotplug lock to get a stable value.
+	 * Write access to present_pages at runtime should be protected by
+	 * lock_memory_hotplug()/unlock_memory_hotplug(). Any reader who can't
+	 * tolerate drift of present_pages should hold the memory hotplug lock
+	 * to get a stable value.
+	 *
+	 * Read access to managed_pages should be safe because it's unsigned
+	 * long. Write access to zone->managed_pages and totalram_pages is
+	 * protected by managed_page_count_lock at runtime. Ideally only
+	 * adjust_managed_page_count() should be used instead of directly
+	 * touching zone->managed_pages and totalram_pages.
	 */
	unsigned long spanned_pages;
	unsigned long present_pages;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 45be58c..ca1a6ce 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -100,6 +100,9 @@ nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
 };
 EXPORT_SYMBOL(node_states);
 
+/* Protect totalram_pages and zone->managed_pages */
+static DEFINE_SPINLOCK(managed_page_count_lock);
+
 unsigned long totalram_pages __read_mostly;
 unsigned long totalreserve_pages __read_mostly;
 /*
@@ -5186,6 +5189,22 @@ early_param("movablecore", cmdline_parse_movablecore);
 
 #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 
+void adjust_managed_page_count(struct page *page, long count)
+{
+	bool lock = (system_state != SYSTEM_BOOTING);
+
+	/* No need to acquire the lock during boot */
+	if (lock)
+		spin_lock(&managed_page_count_lock);
+
+	page_zone(page)->managed_pages += count;
+	totalram_pages += count;
+
+	if (lock)
+		spin_unlock(&managed_page_count_lock);
+}
+EXPORT_SYMBOL(adjust_managed_page_count);
+
 unsigned long free_reserved_area(unsigned long start, unsigned long end,
				int poison, char *s)
 {
-- 
1.7.9.5
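
As referenced in the changelog, here is a minimal sketch (not part of the patch) of how a balloon-style driver such as Xen balloon or virtio_balloon could use the new interface. The helpers balloon_inflate_one()/balloon_deflate_one() and the allocation flags are hypothetical; only adjust_managed_page_count() comes from this patch.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical inflate step: take one page away from the kernel. */
static struct page *balloon_inflate_one(void)
{
	struct page *page = alloc_page(GFP_HIGHUSER | __GFP_NOWARN);

	if (page)
		/*
		 * The page is no longer usable by the kernel, so shrink
		 * zone->managed_pages and totalram_pages together; the
		 * helper serializes both updates under
		 * managed_page_count_lock at runtime.
		 */
		adjust_managed_page_count(page, -1);
	return page;
}

/* Hypothetical deflate step: give the page back to the buddy system. */
static void balloon_deflate_one(struct page *page)
{
	adjust_managed_page_count(page, 1);
	__free_page(page);
}

The point of the sketch is that a driver never touches zone->managed_pages or totalram_pages directly, matching the "Ideally only adjust_managed_page_count() should be used" rule added to the struct zone comment above.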