On 20/11/2020 14:26, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-11-20 09:56:35)
>> From: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
>>
>> Guc->mmio_msg is set under the guc->irq_lock in guc_get_mmio_msg so it
>> should be consumed under the same lock from guc_handle_mmio_msg.
>>
>> I am not sure if the overall flow here makes complete sense but at least
>> the correct lock is now used.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
>> Cc: Daniele Ceraolo Spurio <daniele.ceraolospu...@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_uc.c | 16 ++++++----------
>>   1 file changed, 6 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> index 4e6070e95fe9..220626c3ad81 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> @@ -175,19 +175,15 @@ static void guc_get_mmio_msg(struct intel_guc *guc)
>>   
>>   static void guc_handle_mmio_msg(struct intel_guc *guc)
>>   {
>> -       struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
>> -
>>          /* we need communication to be enabled to reply to GuC */
>>          GEM_BUG_ON(!guc_communication_enabled(guc));
>>   
>> -       if (!guc->mmio_msg)
>> -               return;
>> -
>> -       spin_lock_irq(&i915->irq_lock);
>> -       intel_guc_to_host_process_recv_msg(guc, &guc->mmio_msg, 1);
>> -       spin_unlock_irq(&i915->irq_lock);
>> -
>> -       guc->mmio_msg = 0;
>> +       spin_lock_irq(&guc->irq_lock);
>> +       if (guc->mmio_msg) {
>> +               intel_guc_to_host_process_recv_msg(guc, &guc->mmio_msg, 1);
>> +               guc->mmio_msg = 0;
>> +       }
>> +       spin_unlock_irq(&guc->irq_lock);
> 
> Based on just looking at mmio_msg, the locking should be guc->irq_lock, and
> guc->mmio_msg = 0 should be pulled under the lock.
> 
> Reviewed-by: Chris Wilson <ch...@chris-wilson.co.uk>

Thanks. What made me say I am not sure the overall flow completely makes 
sense is that mmio_msg appears to be used only from 
guc_enable_communication and guc_disable_communication, which I would 
assume are already mutually exclusive with each other. So I was not sure 
what value the locking around mmio_msg access actually adds.
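
The only scenario I can see where the lock would buy anything is a 
concurrent writer running from interrupt context while enable/disable is 
in flight. Purely as a hypothetical illustration (guc_irq_store_mmio_msg 
is a made-up name, not a function in the driver), such a writer would 
look something like:

/*
 * Hypothetical sketch, not code from the driver: a writer running in
 * hard-irq context would need the irqsave lock variants, and every
 * other access to mmio_msg would then have to hold guc->irq_lock too.
 */
static void guc_irq_store_mmio_msg(struct intel_guc *guc, u32 val)
{
        unsigned long flags;

        spin_lock_irqsave(&guc->irq_lock, flags);
        guc->mmio_msg |= val & guc->msg_enabled_mask;
        spin_unlock_irqrestore(&guc->irq_lock, flags);
}

But as far as I can see no such path exists today.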

And even in guc_enable_communication we have a sequence of:

        guc_get_mmio_msg(guc);
        guc_handle_mmio_msg(guc);

Which expands to:

static void guc_get_mmio_msg(struct intel_guc *guc)
{
        u32 val;

        spin_lock_irq(&guc->irq_lock);

        val = intel_uncore_read(guc_to_gt(guc)->uncore, SOFT_SCRATCH(15));
        guc->mmio_msg |= val & guc->msg_enabled_mask;

        /*
         * clear all events, including the ones we're not currently servicing,
         * to make sure we don't try to process a stale message if we enable
         * handling of more events later.
         */
        guc_clear_mmio_msg(guc);

        spin_unlock_irq(&guc->irq_lock);
}

static void guc_handle_mmio_msg(struct intel_guc *guc)
{
        /* we need communication to be enabled to reply to GuC */
        GEM_BUG_ON(!guc_communication_enabled(guc));

        spin_lock_irq(&guc->irq_lock);
        if (guc->mmio_msg) {
                intel_guc_to_host_process_recv_msg(guc, &guc->mmio_msg, 1);
                guc->mmio_msg = 0;
        }
        spin_unlock_irq(&guc->irq_lock);
}

So taking the lock twice back to back seems a bit pointless. 
Nevertheless, the intent of this patch was only to remove the usage of 
the wrong lock, i915->irq_lock.
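
Just to illustrate the point, not something I am proposing in this patch: 
for the enable path the two could even be folded into a single critical 
section. A rough sketch, built only from the code quoted above 
(guc_get_and_handle_mmio_msg is a hypothetical name, and the disable path 
would still need the standalone guc_get_mmio_msg since it cannot reply):

static void guc_get_and_handle_mmio_msg(struct intel_guc *guc)
{
        u32 val;

        /* we need communication to be enabled to reply to GuC */
        GEM_BUG_ON(!guc_communication_enabled(guc));

        spin_lock_irq(&guc->irq_lock);

        val = intel_uncore_read(guc_to_gt(guc)->uncore, SOFT_SCRATCH(15));
        guc->mmio_msg |= val & guc->msg_enabled_mask;

        /* clear all events so a stale message is not processed later */
        guc_clear_mmio_msg(guc);

        if (guc->mmio_msg) {
                intel_guc_to_host_process_recv_msg(guc, &guc->mmio_msg, 1);
                guc->mmio_msg = 0;
        }

        spin_unlock_irq(&guc->irq_lock);
}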

Regards,

Tvrtko
