> -----Original Message-----
> From: De Marchi, Lucas <lucas.demar...@intel.com>
> Sent: Wednesday, April 6, 2022 10:46 AM
> To: Srivatsa, Anusha <anusha.sriva...@intel.com>
> Cc: intel-gfx@lists.freedesktop.org
> Subject: Re: [Intel-gfx] [PATCH] drm/i915/dmc: Add MMIO range restrictions
>
> On Wed, Apr 06, 2022 at 10:16:55AM -0700, Anusha Srivatsa wrote:
> >
> >
> >> -----Original Message-----
> >> From: De Marchi, Lucas <lucas.demar...@intel.com>
> >> Sent: Tuesday, April 5, 2022 11:03 AM
> >> To: Srivatsa, Anusha <anusha.sriva...@intel.com>
> >> Cc: intel-gfx@lists.freedesktop.org
> >> Subject: Re: [Intel-gfx] [PATCH] drm/i915/dmc: Add MMIO range
> >> restrictions
> >>
> >> On Tue, Apr 05, 2022 at 10:14:29AM -0700, Anusha Srivatsa wrote:
> >> >Bspec has added some steps that check the DMC MMIO ranges before
> >> >programming them.
> >> >
> >> >v2: Fix for CI failure for v1
> >> >
> >> >Cc: Lucas De Marchi <lucas.demar...@intel.com>
> >> >Signed-off-by: Anusha Srivatsa <anusha.sriva...@intel.com>
> >> >---
> >> > drivers/gpu/drm/i915/display/intel_dmc.c | 42 ++++++++++++++++++++++++
> >> > 1 file changed, 42 insertions(+)
> >> >
> >> >diff --git a/drivers/gpu/drm/i915/display/intel_dmc.c b/drivers/gpu/drm/i915/display/intel_dmc.c
> >> >index 257cf662f9f4..05d8e90854ec 100644
> >> >--- a/drivers/gpu/drm/i915/display/intel_dmc.c
> >> >+++ b/drivers/gpu/drm/i915/display/intel_dmc.c
> >> >@@ -103,6 +103,18 @@ MODULE_FIRMWARE(BXT_DMC_PATH);
> >> > #define DMC_V1_MAX_MMIO_COUNT 8
> >> > #define DMC_V3_MAX_MMIO_COUNT 20
> >> > #define DMC_V1_MMIO_START_RANGE 0x80000
> >> >+#define TGL_MAIN_MMIO_START 0x8F000
> >> >+#define TGL_MAIN_MMIO_END 0x8FFFF
> >> >+#define TGL_PIPEA_MMIO_START 0x92000
> >> >+#define TGL_PIPEA_MMIO_END 0x93FFF
> >> >+#define TGL_PIPEB_MMIO_START 0x96000
> >> >+#define TGL_PIPEB_MMIO_END 0x97FFF
> >> >+#define TGL_PIPEC_MMIO_START 0x9A000
> >> >+#define TGL_PIPEC_MMIO_END 0x9BFFF
> >> >+#define TGL_PIPED_MMIO_START 0x9E000
> >> >+#define TGL_PIPED_MMIO_END 0x9FFFF
> >> >+#define ADLP_PIPE_MMIO_START 0x5F000
> >> >+#define ADLP_PIPE_MMIO_END 0x5FFFF
> >> >
> >> > struct intel_css_header {
> >> > /* 0x09 for DMC */
> >> >@@ -374,6 +386,30 @@ static void dmc_set_fw_offset(struct intel_dmc *dmc,
> >> > }
> >> > }
> >> >
> >> >+static bool dmc_mmio_addr_sanity_check(struct intel_dmc *dmc, const u32 *mmioaddr,
> >> >+ u32 mmio_count)
> >> >+{
> >> >+ struct drm_i915_private *i915 = container_of(dmc, typeof(*i915), dmc);
> >> >+ int i;
> >> >+
> >> >+ if (IS_DG2(i915) || IS_ALDERLAKE_P(i915)) {
> >> >+ for (i = 0; i < mmio_count; i++) {
> >> >+ if (!((mmioaddr[i] >= TGL_MAIN_MMIO_START && mmioaddr[i] <= TGL_MAIN_MMIO_END) ||
> >> >+ (mmioaddr[i] >= ADLP_PIPE_MMIO_START && mmioaddr[i] <= ADLP_PIPE_MMIO_END)))
> >> >+ return false;
> >> >+ }
> >> >+ } else if (IS_TIGERLAKE(i915) || IS_DG1(i915) || IS_ALDERLAKE_S(i915))
> >> >+ for (i = 0; i < mmio_count; i++) {
> >> >+ if (!((mmioaddr[i] >= TGL_MAIN_MMIO_START && mmioaddr[i] <= TGL_MAIN_MMIO_END) ||
> >> >+ (mmioaddr[i] >= TGL_PIPEA_MMIO_START && mmioaddr[i] <= TGL_PIPEA_MMIO_END) ||
> >> >+ (mmioaddr[i] >= TGL_PIPEB_MMIO_START && mmioaddr[i] <= TGL_PIPEB_MMIO_END) ||
> >> >+ (mmioaddr[i] >= TGL_PIPEC_MMIO_START && mmioaddr[i] <= TGL_PIPEC_MMIO_END) ||
> >> >+ (mmioaddr[i] >= TGL_PIPED_MMIO_START && mmioaddr[i] <= TGL_PIPED_MMIO_END)))
> >> >+ return false;
> >>
> >> wonder if we should check for each pipe DMC range independently
> >> rather than just checking all the ranges.
> >Can convert this to a switch case in that scenario: "if it is pipe A, then it
> >must be within this range". But that would mean two switches: one for DG2 and
> >ADLP, and one for TGL and the rest, which have individual ranges for every pipe.
>
> I was thinking more about like this:
>
> #define _TGL_PIPEA_MMIO 0x92000
> #define _TGL_PIPEB_MMIO 0x96000
> #define TGL_PIPE_MMIO(pipe) _MMIO_PIPE(pipe, _TGL_PIPEA_MMIO, _TGL_PIPEB_MMIO)
> #define TGL_PIPE_MMIO_SIZE 0x1000
Hmm, does it make sense to add something like:
#define DMC_MMIO(dmc_id) _MMIO(_PICK(dmc_id, DMC_FW_MAIN, DMC_FW_PIPEA, DMC_FW_PIPEB, DMC_FW_PIPEC, DMC_FW_PIPED))
> This of course means that each blob is supposed to update only addresses
> in its own range. Is this true? Do we care?
Yes, pipe A will update its own range and so on.
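Something along these lines then, maybe? This is just a rough, untested sketch to
check that I understood the suggestion; the dmc_id -> window mapping and the 0x4000
per-pipe stride are my assumptions based on the defines in the patch:

static bool tgl_dmc_mmio_check(u8 dmc_id, const u32 *mmioaddr, u32 mmio_count)
{
	u32 start, end, i;

	if (dmc_id == DMC_FW_MAIN) {
		start = TGL_MAIN_MMIO_START;
		end = TGL_MAIN_MMIO_END;
	} else {
		/* pipe blobs: dmc_id 1..4 -> pipe A..D, windows 0x4000 apart */
		start = TGL_PIPEA_MMIO_START + (dmc_id - 1) * 0x4000;
		end = TGL_PIPEA_MMIO_END + (dmc_id - 1) * 0x4000;
	}

	for (i = 0; i < mmio_count; i++) {
		if (mmioaddr[i] < start || mmioaddr[i] > end)
			return false;
	}

	return true;
}

That way each blob is only allowed to touch its own window instead of any of the ranges.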
> >
> >> >+ }
> >> >+ return true;
> >> >+}
> >> >+
> >> > static u32 parse_dmc_fw_header(struct intel_dmc *dmc,
> >> > const struct intel_dmc_header_base *dmc_header,
> >> > size_t rem_size, u8 dmc_id)
> >> >@@ -443,6 +479,12 @@ static u32 parse_dmc_fw_header(struct intel_dmc *dmc,
> >> > return 0;
> >> > }
> >> >
> >> >+ if (dmc_header->header_ver == 3) {
> >>
> >> this also needs to be done for ver 2
> >For v2, though, there has been no update about the start range. As in, this
> >mmio range is different from the RAM_MMIO_START range.
>
> it is the same situation as v3. We read it from the firmware. Why do you simply
> trust the value in v2 but not in v3? You removed the check in
> 3d5928a168a9 ("drm/i915/xelpd: Pipe A DMC plugging")
>
> for (i = 0; i < mmio_count; i++) {
> - if (mmioaddr[i] < DMC_MMIO_START_RANGE ||
> - mmioaddr[i] > DMC_MMIO_END_RANGE) {
> - drm_err(&i915->drm, "DMC firmware has wrong mmio address 0x%x\n",
> - mmioaddr[i]);
> - return 0;
> - }
> dmc_info->mmioaddr[i] = _MMIO(mmioaddr[i]);
> dmc_info->mmiodata[i] = mmiodata[i];
> }
>
> I remember mentioning this during review, but let it pass.
Thanks for the above snippet, I will take care of it.
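Something like this is what I have in mind for the next version (untested sketch;
calling it once after both the v3 and the v1 paths have filled mmioaddr[]/mmio_count
is my assumption about the placement):

	if (!dmc_mmio_addr_sanity_check(dmc, mmioaddr, mmio_count)) {
		drm_err(&i915->drm, "DMC firmware has wrong mmio address(es)\n");
		return 0;
	}

i.e. run the range check for both header versions instead of only for header_ver == 3.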
Anusha
> Lucas De Marchi
>
> >
> >Anusha
> >
> >> Lucas De Marchi