On Mon, Oct 05, 2020 at 05:24:19PM +0800, Hillf Danton wrote:
> 
> On Sun,  4 Oct 2020 12:21:45
> > From: Rob Clark <robdcl...@chromium.org>
> > 
> > Now that the inactive_list is protected by mm_lock, and everything
> > else on per-obj basis is protected by obj->lock, we no longer depend
> > on struct_mutex.
> > 
> > Signed-off-by: Rob Clark <robdcl...@chromium.org>
> > ---
> >  drivers/gpu/drm/msm/msm_gem.c          |  1 -
> >  drivers/gpu/drm/msm/msm_gem_shrinker.c | 54 --------------------------
> >  2 files changed, 55 deletions(-)
> > 
> [...]
> 
> > @@ -71,13 +33,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
> >  {
> >     struct msm_drm_private *priv =
> >             container_of(shrinker, struct msm_drm_private, shrinker);
> > -   struct drm_device *dev = priv->dev;
> >     struct msm_gem_object *msm_obj;
> >     unsigned long freed = 0;
> > -   bool unlock;
> > -
> > -   if (!msm_gem_shrinker_lock(dev, &unlock))
> > -           return SHRINK_STOP;
> >  
> >     mutex_lock(&priv->mm_lock);
> 
> It would be better to document this change in behavior, i.e. that
> returning SHRINK_STOP is no longer needed.

btw I read through this and noticed you have your own obj lock, plus
mutex_lock_nested. I strongly recommend just cutting over to dma_resv_lock
for all object locking needs (SoC drivers have unfortunately been terrible
about this), and in the shrinker using dma_resv_trylock instead of trying
to play clever games to outsmart lockdep.
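
Just to illustrate, a rough sketch of what I mean for the scan callback.
The list/member names follow what msm already has, and is_purgeable()/
msm_gem_purge() stand in for whatever helpers actually drop the backing
storage, so treat those as assumptions, not a drop-in patch:

	static unsigned long
	msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
	{
		struct msm_drm_private *priv =
			container_of(shrinker, struct msm_drm_private, shrinker);
		struct msm_gem_object *msm_obj;
		unsigned long freed = 0;

		mutex_lock(&priv->mm_lock);

		list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
			if (freed >= sc->nr_to_scan)
				break;
			/*
			 * Trylock the reservation lock: if someone else holds
			 * it (e.g. the allocation path that kicked off reclaim
			 * in the first place), skip the object instead of
			 * deadlocking or teaching lockdep special nesting
			 * rules with mutex_lock_nested.
			 */
			if (!dma_resv_trylock(msm_obj->base.resv))
				continue;
			if (is_purgeable(msm_obj)) {
				msm_gem_purge(&msm_obj->base);
				freed += msm_obj->base.size >> PAGE_SHIFT;
			}
			dma_resv_unlock(msm_obj->base.resv);
		}

		mutex_unlock(&priv->mm_lock);

		return freed;
	}

The nice part is that lockdep understands trylock just fine, so you get
full annotations without any _nested tricks.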

I recently wrote an entire blog-length rant on why I think
mutex_lock_nested is too dangerous to be useful:

https://blog.ffwll.ch/2020/08/lockdep-false-positives.html

Nothing specific about this patch here, just a general comment. The same
problem extends to the shmem helpers and the like, which also have their
own locks for everything.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch