Just wanting to follow up post-conference with a few major takeaways,
since I will be a bit sparse during May / early June (so I don't want to
forget, and would like to garner a bit of input on the notes).
If you just want the tl;dr:
0) naming: private -> managed
1) remove global general "possible" and "online" node lists
2) add consistency with "normal" nodes, by opting them all in
to all the new things, and just making that the new normal.
e.g.: node_is_private_managed -> node_is_lru_eligible
3) Have __init add init-time nodes to all the lists;
otherwise the service/owner must add/enable services.
4) Make folio checks just way more explicit per service
e.g.: folio_is_private_managed -> folio_is_ksm_eligible
5) I still think that w/o __GFP_PRIVATE this will be too fragile,
but we're going to give it a try.
6) No callbacks in the MVP
7) MVP will be, essentially, Buddy + MBind support
Otherwise, more notes below.
~Gregory
<wall of text>
0) Naming is hard. Willy and Liam expressed concern over "private".
We briefly discussed "Managed"
This results in the following changes:
- if (folio_is_zone_device(folio))
+ if (folio_is_managed(folio))
and
+ if (node_is_managed(nid))
and
- N_MEMORY_PRIVATE
+ N_MEMORY_MANAGED
I'm less enthused about the last one, but I'm ok with it.
1) There is a desire to fix possible / online node masks to avoid
bad patterns, and maybe to audit existing nodemask users.
There's one UAPI issue with this, and that is that these masks
are exposed to userland by nature of the existing node attributes
(N_MEMORY, N_CPU, N_POSSIBLE, etc.).
I'm considering a name change from `possible` -> `init`, because
that's mostly how it is used (initialize some set of per-node
resources during __init, not at runtime). Externally, this set
would still be reported to uapi as possible.
2) There was concern about inconsistency in how nodes are treated.
Along the lines of #1 - I'm thinking about actually adding explicit
service nodelists, which are populated at boot by __init, and by
hotplug if it's a general purpose node.
So we'd end up with things like:
for_each_ksm_node
for_each_lru_node
for_each_x_node
And we would retire general defines like
for_each_node
for_each_online_node
For any "normal" node, it lands in all the lists.
For the buddy, we would have
for_buddy_node
for the default buddy-node list; otherwise, "managed" nodes would
still be removed from the standard fallback lists.
This means these nodes cannot be reached via nodemask arguments, and
can only be reached by `alloc_pages_node(nid, ...)` nid argument.
I *think* this might resolve the need for __GFP_PRIVATE.
But it's still dependent on system-wide for_each good behavior.
3) How do private nodes get into the lists in the new system?
For any private node, the registering driver (owner) and the managing
service are responsible for adding/removing the nodes from the list.
Example workflow:
0) CXL driver hotplug: add_memory_driver_managed(..., nid, owner)
a) owner=NULL means general purpose node
b) otherwise, reserve nid and (pgdat->owner = owner)
1) hotplug memory onto the node
a) if node is normal, add to all service lists
b) if node is "managed" (private), omit from all lists
2) CXL driver registers node with specific services, e.g.:
cram_register_node(..., nid, owner);
3) Service sets node enabled in appropriate node list, and starts
any appropriate services (kswapd, kcompactd, etc) for that node.
In some cases, nodes would have individual mappings onto services
(cram), in other cases the intent would be to have the memory
otherwise treated as general-purpose, but with special access
patterns (e.g. an LRU node not marked N_MEMORY).
4) There are still concerns about random hooks around the kernel.
My thought is to make this less "random", and more a change
in the way we think about folio operations / node operations
for ALL nodes.
ZONE_DEVICE has a bunch of implicit filtering due to not being
on the LRU - but the intent is to allow flexible LRU membership.
So what if we just made these checks much more explicit overall:
if (folio_is_ksm_eligible(folio)) /* can be merged */
if (folio_is_lru_eligible(folio)) /* managed by lru services */
if (folio_is_demotion_eligible(folio)) /* demotion target */
if (folio_is_mbind_eligible(folio)) /* can be an mbind target */
Rather than rathole over what the set of bits should be, I think it's
more important to determine what the actual operation here will be.
Right now I have this defined as essentially:
folio_pgdat(folio)->private.ops.mask & NP_OPT_KSM
But if we generalize to all nodes / all features, it's essentially
a per-pgdat bitmask lookup:
bool folio_is_ksm_eligible(struct folio *folio) {
	return test_bit(N_FEATURE_KSM, folio_pgdat(folio)->features);
}
With the bonus that all ZONE_DEVICE hooks can be sunk into these
checks, so there are many places in mm/ where this becomes essentially
a single-line change.
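To make the "single-line change" concrete: a typical ZONE_DEVICE filter
in mm/ would become an explicit per-service eligibility check. This is a
hypothetical call site (shown in the same diff style as above), not a
real mm/ function:

```
-	if (folio_is_zone_device(folio))
+	if (!folio_is_ksm_eligible(folio))
		return;
```

Same effect for device folios (which would simply never have the feature
bit set), but the check now also covers managed nodes that opted out.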
5) Lacking __GFP_PRIVATE, I have concerns about fragility.
Previously, __GFP_PRIVATE created a "default opt-out" mechanism.
I *think* the above nodelist changes, specifically removing:
for_each_node()
for_each_online_node()
for_each_node_with_cpus()
would largely recover that default opt-out behavior.
The problem I foresee is with existing node_state masks, like
node_state((node), N_POSSIBLE)
node_state((node), N_CPU)
This might be tractable, but it may also simply be too fragile.
Right now only 3 or 4 locations use node_state() outside mm/, and
I'm tempted to try to sink these into mm/internal.h instead of
include/linux/nodemask.h. If that becomes unpalatable, then I will
lobby for __GFP_PRIVATE again (I may still anyway :P).
6) No callbacks by default, but nothing technically prevents it.
I was already in the process of killing this. I think mmu_notifier
does *most* of what the callbacks were doing anyway, so we can
probably collapse that.
7) David asked me to limit the MVP to Buddy + MBind support.
There are some odd interactions with pagecache, so that might evolve
too (may not be able to reliably fault a file directly onto a private
node, tbd - mempolicy does not apply to page cache faults, so it's
just unreliable).
</wall of text>
~Gregory