On Fri, Feb 28, 2020 at 04:10:42PM +0200, Ville Syrjälä wrote:
> On Fri, Feb 28, 2020 at 01:10:45PM +0200, Ville Syrjälä wrote:
> > On Fri, Feb 28, 2020 at 11:06:41AM +0200, Jani Nikula wrote:
> > > On Thu, 27 Feb 2020, Ville Syrjala <ville.syrj...@linux.intel.com> wrote:
> > > > From: Ville Syrjälä <ville.syrj...@linux.intel.com>
> > > >
> > > > gmbus/aux may be clocked by cdclk, thus we should make sure no
> > > > transfers are ongoing while the cdclk frequency is being changed.
> > > > We do that by simply grabbing all the gmbus/aux mutexes. No one
> > > > else should be holding any more than one of those at a time so
> > > > the lock ordering here shouldn't matter.
> > > >
> > > > Signed-off-by: Ville Syrjälä <ville.syrj...@linux.intel.com>
> > > > ---
> > > >  drivers/gpu/drm/i915/display/intel_cdclk.c | 23 ++++++++++++++++++++++
> > > >  1 file changed, 23 insertions(+)
> > > >
> > > > diff --git a/drivers/gpu/drm/i915/display/intel_cdclk.c b/drivers/gpu/drm/i915/display/intel_cdclk.c
> > > > index 0741d643455b..f69bf4a4eb1c 100644
> > > > --- a/drivers/gpu/drm/i915/display/intel_cdclk.c
> > > > +++ b/drivers/gpu/drm/i915/display/intel_cdclk.c
> > > > @@ -1868,6 +1868,9 @@ static void intel_set_cdclk(struct drm_i915_private *dev_priv,
> > > >                             const struct intel_cdclk_config *cdclk_config,
> > > >                             enum pipe pipe)
> > > >  {
> > > > +       struct intel_encoder *encoder;
> > > > +       unsigned int aux_mutex_lockclass = 0;
> > > > +
> > > >         if (!intel_cdclk_changed(&dev_priv->cdclk.hw, cdclk_config))
> > > >                 return;
> > > >  
> > > > @@ -1876,8 +1879,28 @@ static void intel_set_cdclk(struct drm_i915_private *dev_priv,
> > > >  
> > > >         intel_dump_cdclk_config(cdclk_config, "Changing CDCLK to");
> > > >  
> > > > +       /*
> > > > +        * Lock aux/gmbus while we change cdclk in case those
> > > > +        * functions use cdclk. Not all platforms/ports do,
> > > > +        * but we'll lock them all for simplicity.
> > > > +        */
> > > > +       mutex_lock(&dev_priv->gmbus_mutex);
> > > > +       for_each_intel_dp(&dev_priv->drm, encoder) {
> > > > +               struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
> > > > +
> > > > +               mutex_lock_nested(&intel_dp->aux.hw_mutex,
> > > > +                                 aux_mutex_lockclass++);
> > > > +       }
> > > > +
> > > >         dev_priv->display.set_cdclk(dev_priv, cdclk_config, pipe);
> > > >  
> > > > +       for_each_intel_dp(&dev_priv->drm, encoder) {
> > > > +               struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
> > > > +
> > > > +               mutex_unlock(&intel_dp->aux.hw_mutex);
> > > > +       }
> > > > +       mutex_unlock(&dev_priv->gmbus_mutex);
> > > > +
> > > 
> > > I'm becoming increasingly sensitive to directly touching the private
> > > parts of other modules... gmbus_mutex is really for intel_gmbus.c and
> > > aux.hw_mutex for drm_dp_helper.c.
> > > 
> > > One could also argue that the cdclk is a lower level function used by
> > > higher level functions aux/gmbus, and it seems like the higher level
> > > function should lock the cdclk while it depends on it, not the other way
> > > around.
> > 
> > That would require a rwsem. Otherwise it all gets serialized needlessly.
> > Not sure what's the state of rwsems these days, but IIRC at some point
> > the rt patches converted them all to normal mutexes.
> 
> Some googling suggests that my information may be out of date. So we
> could introduce an rwsem for this I guess. Would add a bit more cost
> to the common codepaths though (not that they're really performance
> critical) whereas this version only adds extra cost to the much more
> rare .set_cdclk() path.
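
For completeness, the rwsem variant discussed above would look roughly
like this (cdclk_rwsem is just an illustrative name, nothing like this
has actually been posted):

	/* aux/gmbus transfer path: readers, so transfers don't
	 * serialize against each other
	 */
	down_read(&dev_priv->cdclk_rwsem);
	/* ... perform the aux/gmbus transfer ... */
	up_read(&dev_priv->cdclk_rwsem);

	/* intel_set_cdclk(): writer, drains in-flight transfers and
	 * blocks new ones while the cdclk frequency changes
	 */
	down_write(&dev_priv->cdclk_rwsem);
	dev_priv->display.set_cdclk(dev_priv, cdclk_config, pipe);
	up_write(&dev_priv->cdclk_rwsem);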

Decided to not bother with the rwsem and just sent a v2 with
the mutex_lock_nest_lock().
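
In case it helps anyone reading along, the v2 amounts to roughly the
following in intel_set_cdclk() (sketch only, see the actual v2 for the
real thing): each AUX hw_mutex is still taken under gmbus_mutex, but
mutex_lock_nest_lock() tells lockdep that gmbus_mutex serializes the
whole set, so the per-lock subclass counter goes away:

	mutex_lock(&dev_priv->gmbus_mutex);
	for_each_intel_dp(&dev_priv->drm, encoder) {
		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);

		/* gmbus_mutex acts as the outer lock for lockdep */
		mutex_lock_nest_lock(&intel_dp->aux.hw_mutex,
				     &dev_priv->gmbus_mutex);
	}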

-- 
Ville Syrjälä
Intel