Quoting Tvrtko Ursulin (2020-11-20 09:56:35)
> From: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
>
> Guc->mmio_msg is set under the guc->irq_lock in guc_get_mmio_msg so it
> should be consumed under the same lock from guc_handle_mmio_msg.
>
> I am not sure if the overall flow here makes complete sense but at least
> the correct lock is now used.
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
> Cc: Daniele Ceraolo Spurio <daniele.ceraolospu...@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/uc/intel_uc.c | 16 ++++++----------
>  1 file changed, 6 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> index 4e6070e95fe9..220626c3ad81 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> @@ -175,19 +175,15 @@ static void guc_get_mmio_msg(struct intel_guc *guc)
>
>  static void guc_handle_mmio_msg(struct intel_guc *guc)
>  {
> -	struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
> -
>  	/* we need communication to be enabled to reply to GuC */
>  	GEM_BUG_ON(!guc_communication_enabled(guc));
>
> -	if (!guc->mmio_msg)
> -		return;
> -
> -	spin_lock_irq(&i915->irq_lock);
> -	intel_guc_to_host_process_recv_msg(guc, &guc->mmio_msg, 1);
> -	spin_unlock_irq(&i915->irq_lock);
> -
> -	guc->mmio_msg = 0;
> +	spin_lock_irq(&guc->irq_lock);
> +	if (guc->mmio_msg) {
> +		intel_guc_to_host_process_recv_msg(guc, &guc->mmio_msg, 1);
> +		guc->mmio_msg = 0;
> +	}
> +	spin_unlock_irq(&guc->irq_lock);
Based on just looking at mmio_msg, the locking should be guc->irq_lock, and
guc->mmio_msg = 0 should be pulled under the lock.
Reviewed-by: Chris Wilson <ch...@chris-wilson.co.uk>
-Chris