On Thu, 15 Jan 2026 11:24:49 +0000
Steven Price <[email protected]> wrote:

> On 15/01/2026 10:50, Boris Brezillon wrote:
> > Hi Steve,
> > 
> > On Wed, 14 Jan 2026 15:05:36 +0000
> > Steven Price <[email protected]> wrote:
> >   
> >> On 09/01/2026 13:08, Boris Brezillon wrote:  
> [...]
> >>> @@ -1250,10 +1637,42 @@ static struct drm_info_list panthor_gem_debugfs_list[] = {
> >>>   { "gems", panthor_gem_show_bos, 0, NULL },
> >>>  };
> >>>  
> >>> +static int shrink_get(void *data, u64 *val)
> >>> +{
> >>> + struct panthor_device *ptdev =
> >>> +         container_of(data, struct panthor_device, base);
> >>> +
> >>> + *val = ptdev->reclaim.nr_pages_reclaimed_on_last_scan;
> >>> + return 0;
> >>> +}
> >>> +
> >>> +static int shrink_set(void *data, u64 val)
> >>> +{
> >>> + struct panthor_device *ptdev =
> >>> +         container_of(data, struct panthor_device, base);
> >>> + struct shrink_control sc = {
> >>> +         .gfp_mask = GFP_KERNEL,
> >>> +         .nr_to_scan = val,
> >>> + };
> >>> +
> >>> + fs_reclaim_acquire(GFP_KERNEL);
> >>> + if (ptdev->reclaim.shrinker)
> >>> +         panthor_gem_shrinker_scan(ptdev->reclaim.shrinker, &sc);
> >>> + fs_reclaim_release(GFP_KERNEL);
> >>> +
> >>> + return 0;
> >>> +}    
> >>
> >> Do you have some test to drive this?  
> > 
> > Yes, I do [1].
> >   
> >> My immediate thought was that it
> >> would be nice (for manual testing at least) to printk the return value
> >> from panthor_gem_shrinker_scan(). But we probably wouldn't actually need
> >> nr_pages_reclaimed_on_last_scan if you could just read that from dmesg.
> >> But I can see integrating that into a test might not be ideal.  
> > 
> > I basically based the interface on the existing MSM one. Might not be
> > the best, but it was good enough for this initial testing.  
> 
> Ah well if we're matching MSM then that's probably a good justification.
> It just seemed a little odd throwing away the return value and then
> having to have a separate mechanism to get the number of pages reclaimed
> out. And given I'd just spotted a bug in the return value I thought I'd
> ask ;).

For that particular thing, I diverged from what MSM was doing, because
in some cases SHRINK_STOP is returned even though pages were reclaimed.
I need to double-check the ->scan() semantics: I've gotten them wrong a
few times before, and I've since paged out everything about these
tricky corner cases.

> >> I have to admit to being very unsure about all of this - I even resorted
> >> to asking AI, which I think has made me more confused ;)  
> > 
> > I think you're right that we shouldn't complain+fail if pin_count > 0
> > in the de-eviction path. As I said above, de-eviction for the CPU is not
> > the same as de-eviction for the GPU, so pages being present and pinned
> > doesn't mean we have nothing to do for the GPU mapping to be restored.
> > Maybe we should come up with a better name for this function.  
> 
> Yes the AI was insisting that the problem was the GPU submission would
> fail. Sadly it was incredibly bad at actually explaining the issue.
> 
> So I agree it looks like it would be safe to remove the WARN_ON and keep
> going in the case of pin_count > 0. The (also confusingly named)
> "vm_bo_validate" will be called because the evicted flag is set which
> will get the BO mapped on the GPU again.
> 
> I'm all for better names, but I'm afraid I don't have any suggestions.

Alright, I'll stick to gem_swapin() then.
