Hi Boris,

On 05/03/2026 12:43, Boris Brezillon wrote:
> Hello,
> 
> This is an attempt at adding a GEM shrinker to panthor so the system
> can finally reclaim GPU memory.
> 
> This implementation is loosely based on the MSM shrinker (which is why
> I added the MSM maintainers in Cc), and it's relying on the drm_gpuvm
> eviction/validation infrastructure.
> 
> I've only done very basic IGT-based [1] and chromium-based (opening
> a lot of tabs on Aquarium until the system starts reclaiming+swapping
> out GPU buffers) testing, but I'm posting this early so I can get
> preliminary feedback on the implementation. If someone knows about
> better tools/ways to test the shrinker, please let me know.

I did a very basic test with glmark and I can reproduce the splat below:

[  290.502999] ------------[ cut here ]------------
[  290.504338] refcount_t: underflow; use-after-free.
[  290.504843] WARNING: lib/refcount.c:28 at refcount_warn_saturate+0xf4/0x144, CPU#5: kworker/u32:3/75
[  290.505715] Modules linked in: panthor drm_gpuvm drm_exec gpu_sched
[  290.506402] CPU: 5 UID: 0 PID: 75 Comm: kworker/u32:3 Not tainted 7.0.0-rc1-00176-g608e8196cd63 #202 PREEMPT
[  290.507323] Hardware name: Radxa ROCK 5B (DT)
[  290.507733] Workqueue: events_unbound commit_work
[  290.508185] pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[  290.508835] pc : refcount_warn_saturate+0xf4/0x144
[  290.509287] lr : refcount_warn_saturate+0xf4/0x144
[  290.509741] sp : ffff800083cb3b80
[  290.510056] x29: ffff800083cb3b80 x28: ffff8000821d1e88 x27: ffff00010fa058a0
[  290.510724] x26: 0000000000000000 x25: 0000000000000000 x24: 00000000ffffffff
[  290.511398] x23: ffff00010b149000 x22: ffff00010dd3a7c8 x21: ffff80008226c828
[  290.512065] x20: ffff00010dd3a780 x19: ffff00010dd3a780 x18: 00000000ffffffff
[  290.512735] x17: 00000000ffffffff x16: ffff800083cb3668 x15: 0000000000001e00
[  290.513403] x14: ffff000102a8f69f x13: ffff8000821fb558 x12: 000000000000083d
[  290.514074] x11: 00000000000002bf x10: ffff800082253558 x9 : ffff8000821fb558
[  290.514746] x8 : 00000000ffffefff x7 : ffff800082253558 x6 : 80000000fffff000
[  290.515414] x5 : ffff0001fef31588 x4 : 0000000000000000 x3 : ffff80017d1e5000
[  290.516083] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff000102b61980
[  290.516752] Call trace:
[  290.516987]  refcount_warn_saturate+0xf4/0x144 (P)
[  290.517440]  drm_sched_fence_release_scheduled+0xe0/0xe4 [gpu_sched]
[  290.518046]  dma_fence_release+0xb4/0x3cc
[  290.518429]  drm_sched_fence_release_finished+0x94/0xa8 [gpu_sched]
[  290.519021]  dma_fence_release+0xb4/0x3cc
[  290.519401]  dma_fence_array_release+0x94/0x104
[  290.519829]  dma_fence_release+0xb4/0x3cc
[  290.520208]  drm_atomic_helper_wait_for_fences+0x1a4/0x228
[  290.520724]  commit_tail+0x38/0x18c
[  290.521056]  commit_work+0x14/0x20
[  290.521381]  process_one_work+0x208/0x76c
[  290.521763]  worker_thread+0x1c4/0x36c
[  290.522121]  kthread+0x13c/0x148
[  290.522430]  ret_from_fork+0x10/0x20
[  290.522774] irq event stamp: 2167444
[  290.523114] hardirqs last  enabled at (2167443): [<ffff80008016772c>] __up_console_sem+0x6c/0x80
[  290.523941] hardirqs last disabled at (2167444): [<ffff80008132977c>] el1_brk64+0x20/0x60
[  290.524703] softirqs last  enabled at (2167428): [<ffff8000800c94c4>] handle_softirqs+0x604/0x61c
[  290.525528] softirqs last disabled at (2167421): [<ffff8000800102d0>] __do_softirq+0x14/0x20
[  290.526320] ---[ end trace 0000000000000000 ]---

I haven't yet dug into what's happening.

Thanks,
Steve