On Thu, Dec 10, 2020 at 11:55:23AM +0800, Muchun Song wrote:
> +hugetlb_free_vmemmap
> +     When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
> +     unused vmemmap pages associated each HugeTLB page.
                                      ^^^ with

> -     if (end - start < PAGES_PER_SECTION * sizeof(struct page))
> +     if (is_hugetlb_free_vmemmap_enabled())
> +             err = vmemmap_populate_basepages(start, end, node, NULL);
> +     else if (end - start < PAGES_PER_SECTION * sizeof(struct page))
>               err = vmemmap_populate_basepages(start, end, node, NULL);

Not sure if joining those in an OR makes sense, since both branches call
vmemmap_populate_basepages() with the same arguments.

>       else if (boot_cpu_has(X86_FEATURE_PSE))
>               err = vmemmap_populate_hugepages(start, end, node, altmap);
> @@ -1610,7 +1613,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
>               }
>               get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
>  
> -             if (!boot_cpu_has(X86_FEATURE_PSE)) {
> +             if (!boot_cpu_has(X86_FEATURE_PSE) ||
> +                 is_hugetlb_free_vmemmap_enabled()) {

I would add a variable at the beginning called "basepages_populated"
that holds the result of those two conditions.
I am not sure whether it improves the code much, as the conditions
would not need to be rechecked, but it improves readability a bit.

> +bool hugetlb_free_vmemmap_enabled;
> +
> +static int __init early_hugetlb_free_vmemmap_param(char *buf)
> +{
> +     if (!buf)
> +             return -EINVAL;
> +
> +     /* We cannot optimize if a "struct page" crosses page boundaries. */

I think this comment belongs to the last patch.


-- 
Oscar Salvador
SUSE L3
