On 2/24/26 20:28, Dave Airlie wrote:
> On Tue, 24 Feb 2026 at 17:50, Christian König <[email protected]>
> wrote:
>>
>> On 2/24/26 03:06, Dave Airlie wrote:
>>> From: Dave Airlie <[email protected]>
>>>
>>> This introduces 2 new statistics and 3 new memcontrol APIs for dealing
>>> with GPU system memory allocations.
>>>
>>> The stats correspond to the same stats in the global vmstat:
>>> the number of active GPU pages, and the number of pages in pools that
>>> can be reclaimed.
>>>
>>> The first API charges an order of pages to an objcg, sets the objcg on
>>> the pages like kmem does, and updates the active/reclaim statistics.
>>>
>>> The second API uncharges a page from the obj cgroup it is currently
>>> charged to.
>>>
>>> The third API allows moving a page to/from reclaim and between obj
>>> cgroups. When pages are added to the pool lru, this just updates
>>> accounting. When pages are being removed from a pool lru, they can be
>>> taken from the parent objcg, so this allows them to be uncharged from
>>> there and transferred to a new child objcg.
>>>
>>> Acked-by: Christian König <[email protected]>
>>
>> I have to take that back.
>>
>> After going over the different use cases I'm now pretty convinced that
>> charging any GPU/TTM allocation to memcg is the wrong approach to the
>> problem.
>
> You'll need to sell me a bit more on this idea. I don't hate it, but to
> be honest it seems kind of half-baked and smells a bit of rearchitecting
> without form, so please fire up your writing skills and give me
> something concrete here.
>
>>
>> Instead TTM should have a dmem_cgroup_pool which can limit the amount
>> of system memory each cgroup can use from GTT.
>
> This sounds like a static limit though; how would we configure that in
> a sane way?
See the discussion about the dmem controller for CMA with Mathew, T.J.,
me and a couple of others. It's on dri-devel and I've CCed you on my
latest reply.

>>
>> The use case that GTT memory should account to memcg is actually only
>> valid for an extremely small number of HPC customers, and for those use
>> cases we have different approaches to solve this issue (udmabuf, the
>> system DMA-buf heap, etc...).
>
> Stop, I have a major use case for this that isn't any of those:
> integrated GPUs on Intel and AMD accounting their RAM usage to somewhere
> useful, so that cgroup management of desktop clients actually works, so
> when Firefox uses GPU memory it gets accounted to Firefox and when the
> OOM killer comes along it can choose the correct user.

Oh, yes! I have tried multiple times to fix this as well over the last
decade or so.

> This has been a pain in the ass for desktop for years, and I'd like to
> fix it; the HPC use case is purely a driver for me doing the work.

Wait a second. How does accounting to cgroups help with that in any way?

The last time I looked into this problem, the OOM killer worked based on
the per-task_struct stats, which couldn't be influenced this way.

Both I and others have tried that approach multiple times and so far it
never worked.

> Can you give a detailed explanation of how your idea will work in an
> unconfigured cgroup environment to help this case?

It wouldn't, but I also don't see how this patch set here would. The
accounting limits the amount of memory you can allocate per process for
each cgroup, but it does not affect the OOM killer score in any way.

If we want to fix the OOM killer score we would need to start using the
proportional set size in the OOM killer instead of the resident set
size. And that in turn means the changes to the OOM killer and FS layer
I already proposed over a decade ago.

Otherwise you can always come up with denial-of-service attacks against
centralized services like X or Wayland.
>>
>> What we can do is to say that this dmem_cgroup_pool then also accounts
>> to memcg for selected cgroups. This would not only make it superfluous
>> to have different flags in drivers and TTM to turn this feature on/off,
>> but also allow charging VRAM or other local memory to memcg, because
>> they use system memory as a fallback for device memory.
>>
>> In other, more high-level words: memcg is actually the swapping space
>> for dmem.
>
> This is descriptive, but still feels very static, and nothing I've seen
> indicates I want this to be a 50% type limit.

The initial idea was to have more like a 90% limit by default, so that
at least enough memory is left to SSH into the box and kill a runaway
process.

Christian.

>
> Dave.
>>
