On 4/14/25 3:05 PM, Nico Pache wrote:
> Now that we can collapse to mTHPs lets update the admin guide to
> reflect these changes and provide proper guidence on how to utilize it.
> 
> Signed-off-by: Nico Pache <npa...@redhat.com>
> ---
>  Documentation/admin-guide/mm/transhuge.rst | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index dff8d5985f0f..f0d4e78cedaa 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -63,7 +63,7 @@ often.
>  THP can be enabled system wide or restricted to certain tasks or even
>  memory ranges inside task's address space. Unless THP is completely
>  disabled, there is ``khugepaged`` daemon that scans memory and
> -collapses sequences of basic pages into PMD-sized huge pages.
> +collapses sequences of basic pages into huge pages.
>  
>  The THP behaviour is controlled via :ref:`sysfs <thp_sysfs>`
>  interface and using madvise(2) and prctl(2) system calls.
> @@ -144,6 +144,13 @@ hugepage sizes have enabled="never". If enabling multiple hugepage
>  sizes, the kernel will select the most appropriate enabled size for a
>  given allocation.
>  
> +khugepaged uses max_ptes_none scaled to the order of the enabled mTHP size to
> +determine collapses. When using mTHPs its recommended to set max_ptes_none low.

                                         it's

> +Ideally less than HPAGE_PMD_NR / 2 (255 on 4k page size). This will prevent

   ^^^ not a sentence

> +undesired "creep" behavior that leads to continuously collapsing to a larger
> +mTHP size. max_ptes_shared and max_ptes_swap have no effect when collapsing 
> to a
> +mTHP, and mTHP collapse will fail on shared or swapped out pages.
> +
>  It's also possible to limit defrag efforts in the VM to generate
>  anonymous hugepages in case they're not immediately free to madvise
>  regions or to never try to defrag memory and simply fallback to regular
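
To make "scaled to the order of the enabled mTHP size" a bit more concrete,
here is a small user-space sketch (illustration only, not the kernel's code)
of what such scaling could look like if the allowance is simply right-shifted
by the order difference. The constants assume x86-64 with 4k pages, and
scaled_max_ptes_none() is a made-up helper for the example:

/*
 * Illustration only -- not kernel code.  Shows one way "max_ptes_none
 * scaled to the order of the enabled mTHP size" could behave: the
 * allowance is halved for each order below the PMD order.
 */
#include <stdio.h>

#define PMD_ORDER	9			/* 2M PMD / 4k pages = 512 PTEs */
#define HPAGE_PMD_NR	(1 << PMD_ORDER)

static unsigned int scaled_max_ptes_none(unsigned int max_ptes_none,
					 unsigned int order)
{
	/* assumed scaling: right-shift by the distance from PMD order */
	return max_ptes_none >> (PMD_ORDER - order);
}

int main(void)
{
	unsigned int max_ptes_none = HPAGE_PMD_NR - 1;	/* 511, the default */
	unsigned int order;

	/* order 2 (16k) up to the PMD order itself */
	for (order = 2; order <= PMD_ORDER; order++)
		printf("order %u (%4u pages): up to %3u empty PTEs tolerated\n",
		       order, 1u << order,
		       scaled_max_ptes_none(max_ptes_none, order));
	return 0;
}

With the default max_ptes_none of 511 this would allow, e.g., 255 empty PTEs
for an order-8 (256-page) collapse. As I read it, that is also the point of
the HPAGE_PMD_NR / 2 recommendation above: keep the scaled allowance small
enough that a freshly collapsed mTHP cannot, on its own, satisfy the
threshold of the next larger order, which is the "creep" the new text warns
about.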

-- 
~Randy

