On 8/5/2025 7:25 PM, Jani Nikula wrote:
On Tue, 05 Aug 2025, Ankit Nautiyal <ankit.k.nauti...@intel.com> wrote:
Ensure num_scaler_users does not exceed the size of scaler_state->scalers[]
before accessing scaler parameters in dsc_prefill_latency.

Signed-off-by: Ankit Nautiyal <ankit.k.nauti...@intel.com>
---
  drivers/gpu/drm/i915/display/skl_watermark.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/skl_watermark.c b/drivers/gpu/drm/i915/display/skl_watermark.c
index 5a120c1f66f4..9d52727b81b1 100644
--- a/drivers/gpu/drm/i915/display/skl_watermark.c
+++ b/drivers/gpu/drm/i915/display/skl_watermark.c
@@ -2191,7 +2191,8 @@ dsc_prefill_latency(const struct intel_crtc_state *crtc_state)
 	if (!crtc_state->dsc.compression_enable ||
 	    !num_scaler_users ||
-	    num_scaler_users > crtc->num_scalers)
+	    num_scaler_users > crtc->num_scalers ||
+	    num_scaler_users > ARRAY_SIZE(scaler_state->scalers))
Currently this can't happen. crtc->num_scalers is initialized from
num_scalers[pipe] member of display runtime data, which gets initialized
in __intel_display_device_info_runtime_init().

The only way this could happen is if some platform gains more scalers
per pipe than SKL_NUM_SCALERS. But if that happens, we really want to
fail loudly instead of silently falling back to dsc_prefill_latency,
right?

I'd rather see

        drm_WARN_ON(display->drm, crtc->num_scalers > SKL_NUM_SCALERS);

in intel_crtc_init() than this change.
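
For context, the invariant being relied on here can be sketched roughly as follows (abbreviated from memory, not copied verbatim from intel_display_types.h; surrounding fields are omitted):

	/* Sketch only: the per-CRTC scaler array is sized by SKL_NUM_SCALERS. */
	struct intel_crtc_scaler_state {
		/* ... */
		struct intel_scaler scalers[SKL_NUM_SCALERS];
	};

	struct intel_crtc {
		/* ... */
		int num_scalers;	/* set per pipe from display runtime data */
	};

As long as crtc->num_scalers never exceeds SKL_NUM_SCALERS (which is what the suggested drm_WARN_ON() would enforce at init time), the existing num_scaler_users > crtc->num_scalers check already implies num_scaler_users <= ARRAY_SIZE(scaler_state->scalers), so the extra comparison adds nothing today.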


Thanks for the clarification. My initial concern was that we index into scaler_state->scalers[] using num_scaler_users, so I added the bounds check to avoid a potential out-of-bounds access. But I agree it is better to handle this in intel_crtc_init(), where num_scalers is set. I'll drop this change and send a separate patch that checks crtc->num_scalers in intel_crtc_init().
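
Something along these lines, roughly (just a sketch of the intent; the exact spot in intel_crtc_init() and the accessor for the runtime data are assumptions, not the final patch):

	/* In intel_crtc_init(), around where num_scalers is assigned from the
	 * per-pipe display runtime data (exact code abbreviated): */
	crtc->num_scalers = DISPLAY_RUNTIME_INFO(display)->num_scalers[pipe];

	/* Fail loudly if a future platform ever exposes more scalers per
	 * pipe than scaler_state->scalers[] can hold. */
	drm_WARN_ON(display->drm, crtc->num_scalers > SKL_NUM_SCALERS);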

Regards,

Ankit


 		return dsc_prefill_latency;

 	dsc_prefill_latency = DIV_ROUND_UP(15 * linetime * chroma_downscaling_factor, 10);
