Pages of order >= MAX_ORDER are only allocated at boot, through the bootmem allocator, using the "hugepages=xxx" kernel parameter. By default these pages are never freed after boot, since freeing them would be a one-way street: pages of order >= MAX_ORDER cannot be allocated again later. However, if the administrator is certain these gigantic pages will never be used again, keeping them pinned just wastes memory: other users cannot grab free pages from the gigantic hugetlb pool even under OOM. That is inflexible.

This patchset adds support for shrinking gigantic hugetlb page pools. The administrator can enable a sysctl knob to permit the gigantic hugetlb pool to be shrunk.
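For illustration, here is a minimal sketch of how such a knob could be wired up: a global flag in mm/hugetlb.c, an entry in the vm_table of kernel/sysctl.c, and a relaxed early-return in set_max_huge_pages(). Apart from the knob name hugetlb_shrink_gigantic_pool (visible in the testcase below), the placement and code here are assumptions for illustration, not the actual patch:

	/* mm/hugetlb.c: default 0 keeps the old "never shrink" behaviour */
	int hugetlb_shrink_gigantic_pool __read_mostly;

	/* kernel/sysctl.c: entry added to vm_table[] */
	{
		.procname	= "hugetlb_shrink_gigantic_pool",
		.data		= &hugetlb_shrink_gigantic_pool,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},

	/* mm/hugetlb.c: set_max_huge_pages() today bails out early for
	 * gigantic hstates; with the knob set, a shrink request would be
	 * let through while growing the pool stays forbidden. */
	if (huge_page_order(h) >= MAX_ORDER &&
	    !(hugetlb_shrink_gigantic_pool &&
	      count < persistent_huge_pages(h)))
		return h->max_huge_pages;

Actually releasing an order >= MAX_ORDER page is the other half of the problem: the buddy allocator only manages blocks up to order MAX_ORDER-1, so a gigantic page has to be handed back in MAX_ORDER-1 sized chunks. A hypothetical helper (again only a sketch; it assumes the compound page has already been torn down and per-chunk refcounts set up) could look like:

	/* free one gigantic page as a series of MAX_ORDER-1 buddy blocks */
	static void free_gigantic_page(struct page *page, unsigned int order)
	{
		int i;
		int nr_chunks = 1 << (order - (MAX_ORDER - 1));

		for (i = 0; i < nr_chunks; i++) {
			__free_pages(page, MAX_ORDER - 1);
			page += 1 << (MAX_ORDER - 1);
		}
	}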
Testcase: boot with hugepagesz=1G hugepages=10

[root@localhost hugepages]# free -m
             total       used       free     shared    buffers     cached
Mem:         36269      10836      25432          0         11        288
-/+ buffers/cache:      10537      25732
Swap:        35999          0      35999
[root@localhost hugepages]# echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
-bash: echo: write error: Invalid argument
[root@localhost hugepages]# echo 1 > /proc/sys/vm/hugetlb_shrink_gigantic_pool
[root@localhost hugepages]# echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
[root@localhost hugepages]# free -m
             total       used       free     shared    buffers     cached
Mem:         36269        597      35672          0         11        288
-/+ buffers/cache:         297      35972
Swap:        35999          0      35999

Wanpeng Li (6):
  introduce a new sysctl knob which controls gigantic page pool shrinking
  make update_and_free_page gigantic page aware
  enable gigantic hugetlb page pool shrinking
  use the existing huge_page_order() instead of h->order
  remove the redundant hugetlb_prefault
  use the existing huge_page_shift() interface

 Documentation/sysctl/vm.txt |   13 +++++++
 include/linux/hugetlb.h     |    5 +--
 kernel/sysctl.c             |    7 ++++
 mm/hugetlb.c                |   83 +++++++++++++++++++++++++++++--------------
 mm/internal.h               |    1 +
 mm/page_alloc.c             |    2 +-
 6 files changed, 82 insertions(+), 29 deletions(-)

--
1.7.10.4