On Mon, 20 Mar 2023, Umesh Nerlige Ramappa <umesh.nerlige.rama...@intel.com> wrote:
> On Mon, Mar 20, 2023 at 12:16:17PM +0200, Jani Nikula wrote:
>>On Fri, 17 Mar 2023, Umesh Nerlige Ramappa <umesh.nerlige.rama...@intel.com> wrote:
>>> From: Chris Wilson <chris.p.wil...@linux.intel.com>
>>>
>>> If we fail to adjust the GuC run-control on opening the perf stream,
>>> make sure we unwind the wakeref just taken.
>>>
>>> v2: Retain old goto label names (Ashutosh)
>>>
>>> Fixes: 01e742746785 ("drm/i915/guc: Support OA when Wa_16011777198 is enabled")
>>> Signed-off-by: Chris Wilson <chris.p.wil...@linux.intel.com>
>>> Reviewed-by: Ashutosh Dixit <ashutosh.di...@intel.com>
>>> ---
>>>  drivers/gpu/drm/i915/i915_perf.c       | 14 +++++++++-----
>>>  drivers/gpu/drm/i915/i915_perf_types.h |  6 ++++++
>>>  2 files changed, 15 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
>>> index 824a34ec0b83..283a4a3c6862 100644
>>> --- a/drivers/gpu/drm/i915/i915_perf.c
>>> +++ b/drivers/gpu/drm/i915/i915_perf.c
>>> @@ -1592,9 +1592,7 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
>>>     /*
>>>      * Wa_16011777198:dg2: Unset the override of GUCRC mode to enable rc6.
>>>      */
>>> -   if (intel_uc_uses_guc_rc(&gt->uc) &&
>>> -       (IS_DG2_GRAPHICS_STEP(gt->i915, G10, STEP_A0, STEP_C0) ||
>>> -        IS_DG2_GRAPHICS_STEP(gt->i915, G11, STEP_A0, STEP_B0)))
>>> +   if (stream->override_gucrc)
>>>             drm_WARN_ON(&gt->i915->drm,
>>>                         intel_guc_slpc_unset_gucrc_mode(&gt->uc.guc.slpc));
>>>
>>> @@ -3305,8 +3303,10 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
>>>             if (ret) {
>>>                     drm_dbg(&stream->perf->i915->drm,
>>>                             "Unable to override gucrc mode\n");
>>> -                   goto err_config;
>>> +                   goto err_gucrc;
>>>             }
>>> +
>>> +           stream->override_gucrc = true;
>>>     }
>>>
>>>     ret = alloc_oa_buffer(stream);
>>> @@ -3345,11 +3345,15 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
>>>     free_oa_buffer(stream);
>>>
>>>  err_oa_buf_alloc:
>>> -   free_oa_configs(stream);
>>> +   if (stream->override_gucrc)
>>> +           intel_guc_slpc_unset_gucrc_mode(&gt->uc.guc.slpc);
>>>
>>> +err_gucrc:
>>>     intel_uncore_forcewake_put(stream->uncore, FORCEWAKE_ALL);
>>>     intel_engine_pm_put(stream->engine);
>>>
>>> +   free_oa_configs(stream);
>>> +
>>>  err_config:
>>>     free_noa_wait(stream);
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_perf_types.h b/drivers/gpu/drm/i915/i915_perf_types.h
>>> index ca150b7af3f2..e36f046fe2b6 100644
>>> --- a/drivers/gpu/drm/i915/i915_perf_types.h
>>> +++ b/drivers/gpu/drm/i915/i915_perf_types.h
>>> @@ -316,6 +316,12 @@ struct i915_perf_stream {
>>>      * buffer should be checked for available data.
>>>      */
>>>     u64 poll_oa_period;
>>> +
>>> +   /**
>>> +    * @override_gucrc: GuC RC has been overridden for the perf stream,
>>> +    * and we need to restore the default configuration on release.
>>> +    */
>>> +   bool override_gucrc:1;
>>
>>What do you gain by making this a bitfield? It's already a big struct,
>>and there already are other bool flags.
>
> Noticed it now. I guess this is not portable as it's compiler-dependent.
> I would just remove the bitfield. I do see a few occurrences of bitfield
> bools in i915, though, so are there any specific guidelines on when to use
> bool vs bitfields?

Use a bitfield when you really need the space saving and don't mind the
added cost of accessing the bit. You then need to pay attention to how the
struct members are laid out so you don't introduce silly padding.
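
For illustration, something like this (made-up structs, not the real
i915_perf_stream; the exact sizes depend on the ABI):

#include <stdbool.h>
#include <stdio.h>

/* Made-up structs, just to show the size/access tradeoff. */
struct flags_plain {
	bool a;		/* each plain bool occupies at least one byte */
	bool b;
	bool c;
};

struct flags_packed {
	bool a:1;	/* all three flags share a single byte... */
	bool b:1;
	bool c:1;	/* ...but writes become read-modify-write sequences */
};

int main(void)
{
	/* On common ABIs this prints "3 1" (plus whatever padding the
	 * enclosing struct would add around the members). */
	printf("%zu %zu\n",
	       sizeof(struct flags_plain), sizeof(struct flags_packed));
	return 0;
}

The packed variant saves a couple of bytes, but every read needs a
mask/shift and every write turns into a read-modify-write.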

The VBT code uses bitfields for parsing the binary data; that's
implementation-defined behaviour, but it works for us. You just need to be
mindful of it.
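
Roughly like this (hypothetical field names, not the actual VBT structs;
the point is that the bit-to-field mapping is up to the compiler):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical feature block, not the real VBT layout. */
struct feature_bits {
	uint8_t enable_fbc:1;
	uint8_t enable_ssc:1;
	uint8_t reserved:6;
} __attribute__((packed));

int main(void)
{
	uint8_t raw = 0x02;	/* bit 1 set in the binary blob */
	struct feature_bits f;

	memcpy(&f, &raw, sizeof(f));
	/* GCC/clang on the little-endian targets we care about assign the
	 * first declared field to the least significant bits, so this
	 * prints "ssc=1" -- but C leaves that ordering to the
	 * implementation. */
	printf("ssc=%u\n", f.enable_ssc);
	return 0;
}

It works for us because of the compilers and targets we build for, not
because the C standard guarantees it.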

BR,
Jani.


>
> Thanks,
> Umesh
>>
>>BR,
>>Jani.
>>
>>
>>
>>>  };
>>>
>>>  /**
>>
>>-- 
>>Jani Nikula, Intel Open Source Graphics Center

-- 
Jani Nikula, Intel Open Source Graphics Center
