> -----Original Message-----
> From: Muchun Song [mailto:songmuc...@bytedance.com]
> Sent: Tuesday, November 24, 2020 10:53 PM
> To: cor...@lwn.net; mike.krav...@oracle.com; t...@linutronix.de;
> mi...@redhat.com; b...@alien8.de; x...@kernel.org; h...@zytor.com;
> dave.han...@linux.intel.com; l...@kernel.org; pet...@infradead.org;
> v...@zeniv.linux.org.uk; a...@linux-foundation.org; paul...@kernel.org;
> mchehab+hua...@kernel.org; pawan.kumar.gu...@linux.intel.com;
> rdun...@infradead.org; oneu...@suse.com; anshuman.khand...@arm.com;
> jroe...@suse.de; almasrym...@google.com; rient...@google.com;
> wi...@infradead.org; osalva...@suse.de; mho...@suse.com; Song Bao Hua
> (Barry Song) <song.bao....@hisilicon.com>
> Cc: duanxiongc...@bytedance.com; linux-...@vger.kernel.org;
> linux-kernel@vger.kernel.org; linux...@kvack.org;
> linux-fsde...@vger.kernel.org; Muchun Song <songmuc...@bytedance.com>
> Subject: [PATCH v6 14/16] mm/hugetlb: Add a kernel parameter
> hugetlb_free_vmemmap
> 
> Add a kernel parameter hugetlb_free_vmemmap to disable the feature of
> freeing unused vmemmap pages associated with each hugetlb page on boot.
> 
> Signed-off-by: Muchun Song <songmuc...@bytedance.com>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
>  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
>  mm/hugetlb_vmemmap.c                            | 19 ++++++++++++++++++-
>  3 files changed, 30 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 5debfe238027..d28c3acde965 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -1551,6 +1551,15 @@
>                       Documentation/admin-guide/mm/hugetlbpage.rst.
>                       Format: size[KMG]
> 
> +     hugetlb_free_vmemmap=
> +                     [KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
> +                     this controls freeing unused vmemmap pages associated
> +                     with each HugeTLB page.
> +                     Format: { on | off (default) }
> +
> +                     on:  enable the feature
> +                     off: disable the feature
> +

We have a parameter here, but wouldn't it apply to "x86/mm/64: disable
PMD page mapping of vmemmap" as well? Something like:

	if (hugetlb_free_vmemmap_enabled)
		/* fall back to base-page mapping of vmemmap */

>       hung_task_panic=
>                       [KNL] Should the hung task detector generate panics.
>                       Format: 0 | 1
> diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> index f7b1c7462991..6a8b57f6d3b7 100644
> --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> @@ -145,6 +145,9 @@ default_hugepagesz
> 
>       will all result in 256 2M huge pages being allocated.  Valid default
>       huge page size is architecture dependent.
> +hugetlb_free_vmemmap
> +     When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
> +     unused vmemmap pages associated with each HugeTLB page.
> 
>  When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
>  indicates the current number of pre-allocated huge pages of the default size.
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 509ca451e232..b2222f8d1245 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -131,6 +131,22 @@ typedef void (*vmemmap_pte_remap_func_t)(struct page *reuse, pte_t *ptep,
>                                        unsigned long start, unsigned long end,
>                                        void *priv);
> 
> +static bool hugetlb_free_vmemmap_enabled __initdata;
> +
> +static int __init early_hugetlb_free_vmemmap_param(char *buf)
> +{
> +     if (!buf)
> +             return -EINVAL;
> +
> +     if (!strcmp(buf, "on"))
> +             hugetlb_free_vmemmap_enabled = true;
> +     else if (strcmp(buf, "off"))
> +             return -EINVAL;
> +
> +     return 0;
> +}
> +early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
> +
>  static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
>  {
>       return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
> @@ -322,7 +338,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>       unsigned int order = huge_page_order(h);
>       unsigned int vmemmap_pages;
> 
> -     if (!is_power_of_2(sizeof(struct page))) {
> +     if (!is_power_of_2(sizeof(struct page)) ||
> +         !hugetlb_free_vmemmap_enabled) {
>               pr_info("disable freeing vmemmap pages for %s\n", h->name);
>               return;
>       }
> --
> 2.11.0

Thanks
Barry
