On 15.09.25 15:43, Johannes Weiner wrote:
> On Fri, Sep 12, 2025 at 03:46:36PM +0200, David Hildenbrand wrote:
>> On 12.09.25 15:37, Johannes Weiner wrote:
>>> On Fri, Sep 12, 2025 at 02:25:31PM +0200, David Hildenbrand wrote:
>>>> On 12.09.25 14:19, Kiryl Shutsemau wrote:
>>>>> On Thu, Sep 11, 2025 at 09:27:55PM -0600, Nico Pache wrote:
>>>>>> The following series provides khugepaged with the capability to collapse
>>>>>> anonymous memory regions to mTHPs.
>>>>>>
>>>>>> To achieve this, we generalize the khugepaged functions to no longer
>>>>>> depend on PMD_ORDER. During the PMD scan, we use a bitmap to track the
>>>>>> individual pages that are occupied (!none/zero). After the PMD scan is
>>>>>> done, we do binary recursion on the bitmap to find the optimal mTHP
>>>>>> sizes for the PMD range. The restriction on max_ptes_none is removed
>>>>>> during the scan to make sure we account for the whole PMD range. When no
>>>>>> mTHP size is enabled, the legacy behavior of khugepaged is maintained.
>>>>>> max_ptes_none is scaled by the attempted collapse order to determine how
>>>>>> full an mTHP must be to be eligible for collapse. If an mTHP collapse is
>>>>>> attempted but the range contains swapped-out or shared pages, we do not
>>>>>> perform the collapse. It is now also possible to collapse to mTHPs
>>>>>> without requiring the PMD THP size to be enabled.
>>>>>>
>>>>>> When enabling (m)THP sizes, if max_ptes_none >= HPAGE_PMD_NR/2 (256 with
>>>>>> 4K pages), it is automatically capped to HPAGE_PMD_NR/2 - 1 (255) for
>>>>>> mTHP collapses to prevent collapse "creep" behavior. This avoids
>>>>>> constantly promoting mTHPs to the next available size: a collapse
>>>>>> introduces more non-zero pages, which would then satisfy the promotion
>>>>>> condition on subsequent scans.

>>>>> Hm. Maybe instead of capping at HPAGE_PMD_NR/2 - 1, we could count
>>>>> all-zero 4K pages as none_or_zero? That would mirror the shrinker's logic.


>>>> I am all for not adding any more ugliness on top of all the ugliness we
>>>> added in the past.
>>>>
>>>> I will soon propose deprecating that parameter in favor of something
>>>> that makes a bit more sense.
>>>>
>>>> In essence, we'll likely have an "eagerness" parameter that ranges from
>>>> 0 to 10: 10 is essentially "always collapse", and 0 is "never collapse
>>>> unless everything is populated".
>>>>
>>>> In between, we will have more flexibility in how to set these values.
>>>>
>>>> Likely 9 will map to around 50%, so we don't even tempt the user into
>>>> setting something that does not make sense (creep).

>>> One observation we've had from production experiments is that the
>>> optimal number here isn't static. If you have plenty of memory, then
>>> even very sparse THPs are beneficial.

>> Exactly.
>>
>> And willy suggested something like "eagerness", similar to "swappiness",
>> which gives us more flexibility when implementing it, including
>> dynamically adjusting the values in the future.

> I think we talked past each other a bit here. The point I was trying
> to make is that the optimal behavior depends on the pressure situation
> inside the kernel; it's fundamentally not something userspace can make
> informed choices about.

I don't think the "no tunable at all" approach, based solely on pressure, will be workable in the foreseeable future.

Collapsing 2 populated pages into a 2 MiB THP all over the system, just to split it again immediately, is not particularly helpful.

So long term, I assume the eagerness will work together with memory pressure and probably some other inputs.


> So for max_ptes_none, the approach is basically: try a few settings
> and see which one performs best. Okay, not great. But wouldn't that be
> the same for an eagerness setting? What would be the mental model for
> the user when configuring this? If it's the same empirical approach,
> then the new knob would seem like a lateral move.

Consider it a replacement for something that is oddly PMD-specific and requires you to punch in magical values (e.g., 511 on x86 with 4K pages, 8191 on arm64 with 64K pages).

Initially, I thought about just using a percentage/scale of the (m)THP size, but Willy argued that something more abstract gives us more wiggle room.

Yes, for some workloads you will likely still have to fine-tune parameters (honestly, I don't think many companies besides Meta are doing that), but the idea is to evolve it over time into something smarter than punching magic values into an obscure interface.


> It would also be difficult to change the implementation without
> risking regressions once production systems are tuned to the old
> behavior.

Companies like Meta that do that level of fine-tuning can probably keep using the old, nasty interface, because they know exactly what they are doing.

That is a corner case, though.

--
Cheers

David / dhildenb

