Re: [PATCH] drm/msm/a6xx: request memory region

2024-06-26 Thread Dmitry Baryshkov
On Thu, Jun 27, 2024 at 03:22:18AM GMT, Akhil P Oommen wrote:
> << snip >>
> 
> > > > > > > @@ -1503,7 +1497,7 @@ static void __iomem 
> > > > > > > *a6xx_gmu_get_mmio(struct platform_device *pdev,
> > > > > > > return ERR_PTR(-EINVAL);
> > > > > > > }
> > > > > > >
> > > > > > > -   ret = ioremap(res->start, resource_size(res));
> > > > > > > +   ret = devm_ioremap_resource(&pdev->dev, res);
> > > > > >
> > > > > > So, this doesn't actually work, failing in 
> > > > > > __request_region_locked(),
> > > > > > because the gmu region partially overlaps with the gpucc region 
> > > > > > (which
> > > > > > is busy).  I think this is intentional, since gmu is controlling the
> > > > > > gpu clocks, etc.  In particular REG_A6XX_GPU_CC_GX_GDSCR is in this
> > > > > > overlapping region.  Maybe Akhil knows more about GMU.
> > > > >
> > > > > We don't really need to map gpucc region from driver on behalf of gmu.
> > > > > Since we don't access any gpucc register from drm-msm driver, we can
> > > > > update the range size to correct this. But due to backward 
> > > > > compatibility
> > > > > requirement with older dt, can we still enable region locking? I 
> > > > > prefer
> > > > > it if that is possible.
> > > >
> > > > Actually, when I reduced the region size to not overlap with gpucc,
> > > > the region is smaller than REG_A6XX_GPU_CC_GX_GDSCR * 4.
> > > >
> > > > So I guess that register is actually part of gpucc?
> > >
> > > Yes. It has *GPU_CC* in its name. :P
> > >
> > > I just saw that we program this register on legacy a6xx targets to
> > > ensure retention is really ON before collapsing gdsc. So we can't
> > > avoid mapping gpucc region in legacy a6xx GPUs. That is unfortunate!
> > 
> > I guess we could still use devm_ioremap().. idk if there is a better
> > way to solve this
> 
> Can we do it without breaking backward compatibility with dt?

I think a proper way would be to use devm_ioremap in the gpucc driver,
then the GPU driver can use devm_platform_ioremap_resource().

I'll take a look at sketching the gpucc patches in one of the next few
days.

> 
> -Akhil
> 
> > 
> > BR,
> > -R
> > 
> > > -Akhil.
> > >
> > > >
> > > > BR,
> > > > -R

-- 
With best wishes
Dmitry


Re: [PATCH] drm/msm/a6xx: request memory region

2024-06-26 Thread Akhil P Oommen
<< snip >>

> > > > > > @@ -1503,7 +1497,7 @@ static void __iomem *a6xx_gmu_get_mmio(struct 
> > > > > > platform_device *pdev,
> > > > > > return ERR_PTR(-EINVAL);
> > > > > > }
> > > > > >
> > > > > > -   ret = ioremap(res->start, resource_size(res));
> > > > > > +   ret = devm_ioremap_resource(&pdev->dev, res);
> > > > >
> > > > > So, this doesn't actually work, failing in __request_region_locked(),
> > > > > because the gmu region partially overlaps with the gpucc region (which
> > > > > is busy).  I think this is intentional, since gmu is controlling the
> > > > > gpu clocks, etc.  In particular REG_A6XX_GPU_CC_GX_GDSCR is in this
> > > > > overlapping region.  Maybe Akhil knows more about GMU.
> > > >
> > > > We don't really need to map gpucc region from driver on behalf of gmu.
> > > > Since we don't access any gpucc register from drm-msm driver, we can
> > > > update the range size to correct this. But due to backward compatibility
> > > > requirement with older dt, can we still enable region locking? I prefer
> > > > it if that is possible.
> > >
> > > Actually, when I reduced the region size to not overlap with gpucc,
> > > the region is smaller than REG_A6XX_GPU_CC_GX_GDSCR * 4.
> > >
> > > So I guess that register is actually part of gpucc?
> >
> > Yes. It has *GPU_CC* in its name. :P
> >
> > I just saw that we program this register on legacy a6xx targets to
> > ensure retention is really ON before collapsing gdsc. So we can't
> > avoid mapping gpucc region in legacy a6xx GPUs. That is unfortunate!
> 
> I guess we could still use devm_ioremap().. idk if there is a better
> way to solve this

Can we do it without breaking backward compatibility with dt?

-Akhil

> 
> BR,
> -R
> 
> > -Akhil.
> >
> > >
> > > BR,
> > > -R


Re: [PATCH] drm/msm/a6xx: request memory region

2024-06-25 Thread Rob Clark
On Tue, Jun 25, 2024 at 1:23 PM Akhil P Oommen  wrote:
>
> On Tue, Jun 25, 2024 at 11:03:42AM -0700, Rob Clark wrote:
> > On Tue, Jun 25, 2024 at 10:59 AM Akhil P Oommen wrote:
> > >
> > > On Fri, Jun 21, 2024 at 02:09:58PM -0700, Rob Clark wrote:
> > > > On Sat, Jun 8, 2024 at 8:44 AM Kiarash Hajian
> > > >  wrote:
> > > > >
> > > > > The driver's memory regions are currently just ioremap()ed, but not
> > > > > reserved through a request. That's not a bug, but having the request 
> > > > > is
> > > > > a little more robust.
> > > > >
> > > > > Implement the region-request through the corresponding managed
> > > > > devres-function.
> > > > >
> > > > > Signed-off-by: Kiarash Hajian 
> > > > > ---
> > > > > Changes in v6:
> > > > > -Fix compile error
> > > > > -Link to v5: 
> > > > > https://lore.kernel.org/all/20240607-memory-v1-1-8664f52fc...@gmail.com
> > > > >
> > > > > Changes in v5:
> > > > > - Fix error handling problems.
> > > > > - Link to v4: 
> > > > > https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v4-1-3881a6408...@gmail.com
> > > > >
> > > > > Changes in v4:
> > > > > - Combine v3 commits into a single commit
> > > > > - Link to v3: 
> > > > > https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v3-0-0a728ad45...@gmail.com
> > > > >
> > > > > Changes in v3:
> > > > > - Remove redundant devm_iounmap calls, relying on devres for 
> > > > > automatic resource cleanup.
> > > > >
> > > > > Changes in v2:
> > > > > - update the subject prefix to "drm/msm/a6xx:", to match the 
> > > > > majority of other changes to this file.
> > > > > ---
> > > > >  drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 33 +++++++++++----------------------
> > > > >  1 file changed, 11 insertions(+), 22 deletions(-)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
> > > > > b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > > > > index 8bea8ef26f77..d26cc6254ef9 100644
> > > > > --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > > > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > > > > @@ -525,7 +525,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu 
> > > > > *gmu)
> > > > > bool pdc_in_aop = false;
> > > > >
> > > > > if (IS_ERR(pdcptr))
> > > > > -   goto err;
> > > > > +   return;
> > > > >
> > > > > if (adreno_is_a650(adreno_gpu) ||
> > > > > adreno_is_a660_family(adreno_gpu) ||
> > > > > @@ -541,7 +541,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu 
> > > > > *gmu)
> > > > > if (!pdc_in_aop) {
> > > > > seqptr = a6xx_gmu_get_mmio(pdev, "gmu_pdc_seq");
> > > > > if (IS_ERR(seqptr))
> > > > > -   goto err;
> > > > > +   return;
> > > > > }
> > > > >
> > > > > /* Disable SDE clock gating */
> > > > > @@ -633,12 +633,6 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu 
> > > > > *gmu)
> > > > > wmb();
> > > > >
> > > > > a6xx_rpmh_stop(gmu);
> > > > > -
> > > > > -err:
> > > > > -   if (!IS_ERR_OR_NULL(pdcptr))
> > > > > -   iounmap(pdcptr);
> > > > > -   if (!IS_ERR_OR_NULL(seqptr))
> > > > > -   iounmap(seqptr);
> > > > >  }
> > > > >
> > > > >  /*
> > > > > @@ -1503,7 +1497,7 @@ static void __iomem *a6xx_gmu_get_mmio(struct 
> > > > > platform_device *pdev,
> > > > > return ERR_PTR(-EINVAL);
> > > > > }
> > > > >
> > > > > -   ret = ioremap(res->start, resource_size(res));
> > > > > +   ret = devm_ioremap_resource(&pdev->dev, res);
> > > >
> > > > So, this doesn't actually work, failing in __request_region_locked(),
> > > > because the gmu region partially overlaps with the gpucc region (which
> > > > is busy).  I think this is intentional, since gmu is controlling the
> > > > gpu clocks, etc.  In particular REG_A6XX_GPU_CC_GX_GDSCR is in this
> > > > overlapping region.  Maybe Akhil knows more about GMU.
> > >
> > > We don't really need to map gpucc region from driver on behalf of gmu.
> > > Since we don't access any gpucc register from drm-msm driver, we can
> > > update the range size to correct this. But due to backward compatibility
> > > requirement with older dt, can we still enable region locking? I prefer
> > > it if that is possible.
> >
> > Actually, when I reduced the region size to not overlap with gpucc,
> > the region is smaller than REG_A6XX_GPU_CC_GX_GDSCR * 4.
> >
> > So I guess that register is actually part of gpucc?
>
> Yes. It has *GPU_CC* in its name. :P
>
> I just saw that we program this register on legacy a6xx targets to
> ensure retention is really ON before collapsing gdsc. So we can't
> avoid mapping gpucc region in legacy a6xx GPUs. That is unfortunate!

I guess we could still use devm_ioremap().. idk if there is a better
way to solve this

BR,
-R

> -Akhil.
>
> >
> > BR,
> > -R
> >
> > > FYI, kgsl accesses gpucc registers to ensure gdsc has collapsed. So
> > > gpucc region has to be mapped by kgsl and that is reflected in the kgsl
> > > device tree.

Re: [PATCH] drm/msm/a6xx: request memory region

2024-06-25 Thread Akhil P Oommen
On Tue, Jun 25, 2024 at 11:03:42AM -0700, Rob Clark wrote:
> On Tue, Jun 25, 2024 at 10:59 AM Akhil P Oommen wrote:
> >
> > On Fri, Jun 21, 2024 at 02:09:58PM -0700, Rob Clark wrote:
> > > On Sat, Jun 8, 2024 at 8:44 AM Kiarash Hajian
> > >  wrote:
> > > >
> > > > The driver's memory regions are currently just ioremap()ed, but not
> > > > reserved through a request. That's not a bug, but having the request is
> > > > a little more robust.
> > > >
> > > > Implement the region-request through the corresponding managed
> > > > devres-function.
> > > >
> > > > Signed-off-by: Kiarash Hajian 
> > > > ---
> > > > Changes in v6:
> > > > -Fix compile error
> > > > -Link to v5: 
> > > > https://lore.kernel.org/all/20240607-memory-v1-1-8664f52fc...@gmail.com
> > > >
> > > > Changes in v5:
> > > > - Fix error handling problems.
> > > > - Link to v4: 
> > > > https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v4-1-3881a6408...@gmail.com
> > > >
> > > > Changes in v4:
> > > > - Combine v3 commits into a single commit
> > > > - Link to v3: 
> > > > https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v3-0-0a728ad45...@gmail.com
> > > >
> > > > Changes in v3:
> > > > - Remove redundant devm_iounmap calls, relying on devres for 
> > > > automatic resource cleanup.
> > > >
> > > > Changes in v2:
> > > > - update the subject prefix to "drm/msm/a6xx:", to match the 
> > > > majority of other changes to this file.
> > > > ---
> > > >  drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 33 +++++++++++----------------------
> > > >  1 file changed, 11 insertions(+), 22 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
> > > > b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > > > index 8bea8ef26f77..d26cc6254ef9 100644
> > > > --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > > > @@ -525,7 +525,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
> > > > bool pdc_in_aop = false;
> > > >
> > > > if (IS_ERR(pdcptr))
> > > > -   goto err;
> > > > +   return;
> > > >
> > > > if (adreno_is_a650(adreno_gpu) ||
> > > > adreno_is_a660_family(adreno_gpu) ||
> > > > @@ -541,7 +541,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
> > > > if (!pdc_in_aop) {
> > > > seqptr = a6xx_gmu_get_mmio(pdev, "gmu_pdc_seq");
> > > > if (IS_ERR(seqptr))
> > > > -   goto err;
> > > > +   return;
> > > > }
> > > >
> > > > /* Disable SDE clock gating */
> > > > @@ -633,12 +633,6 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu 
> > > > *gmu)
> > > > wmb();
> > > >
> > > > a6xx_rpmh_stop(gmu);
> > > > -
> > > > -err:
> > > > -   if (!IS_ERR_OR_NULL(pdcptr))
> > > > -   iounmap(pdcptr);
> > > > -   if (!IS_ERR_OR_NULL(seqptr))
> > > > -   iounmap(seqptr);
> > > >  }
> > > >
> > > >  /*
> > > > @@ -1503,7 +1497,7 @@ static void __iomem *a6xx_gmu_get_mmio(struct 
> > > > platform_device *pdev,
> > > > return ERR_PTR(-EINVAL);
> > > > }
> > > >
> > > > -   ret = ioremap(res->start, resource_size(res));
> > > > +   ret = devm_ioremap_resource(&pdev->dev, res);
> > >
> > > So, this doesn't actually work, failing in __request_region_locked(),
> > > because the gmu region partially overlaps with the gpucc region (which
> > > is busy).  I think this is intentional, since gmu is controlling the
> > > gpu clocks, etc.  In particular REG_A6XX_GPU_CC_GX_GDSCR is in this
> > > overlapping region.  Maybe Akhil knows more about GMU.
> >
> > We don't really need to map gpucc region from driver on behalf of gmu.
> > Since we don't access any gpucc register from drm-msm driver, we can
> > update the range size to correct this. But due to backward compatibility
> > requirement with older dt, can we still enable region locking? I prefer
> > it if that is possible.
> 
> Actually, when I reduced the region size to not overlap with gpucc,
> the region is smaller than REG_A6XX_GPU_CC_GX_GDSCR * 4.
> 
> So I guess that register is actually part of gpucc?

Yes. It has *GPU_CC* in its name. :P

I just saw that we program this register on legacy a6xx targets to
ensure retention is really ON before collapsing gdsc. So we can't
avoid mapping gpucc region in legacy a6xx GPUs. That is unfortunate! 

-Akhil.

> 
> BR,
> -R
> 
> > FYI, kgsl accesses gpucc registers to ensure gdsc has collapsed. So
> > gpucc region has to be mapped by kgsl and that is reflected in the kgsl
> > device tree.
> >
> > -Akhil
> >
> > >
> > > BR,
> > > -R
> > >
> > > > if (!ret) {
> > > > DRM_DEV_ERROR(&pdev->dev, "Unable to map the %s 
> > > > registers\n", name);
> > > > return ERR_PTR(-EINVAL);
> > > > @@ -1613,13 +1607,13 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu 
> > > > *a6xx_gpu, struct device_node *node)
> > > 

Re: [PATCH] drm/msm/a6xx: request memory region

2024-06-25 Thread Rob Clark
On Tue, Jun 25, 2024 at 10:59 AM Akhil P Oommen
 wrote:
>
> On Fri, Jun 21, 2024 at 02:09:58PM -0700, Rob Clark wrote:
> > On Sat, Jun 8, 2024 at 8:44 AM Kiarash Hajian
> >  wrote:
> > >
> > > The driver's memory regions are currently just ioremap()ed, but not
> > > reserved through a request. That's not a bug, but having the request is
> > > a little more robust.
> > >
> > > Implement the region-request through the corresponding managed
> > > devres-function.
> > >
> > > Signed-off-by: Kiarash Hajian 
> > > ---
> > > Changes in v6:
> > > -Fix compile error
> > > -Link to v5: 
> > > https://lore.kernel.org/all/20240607-memory-v1-1-8664f52fc...@gmail.com
> > >
> > > Changes in v5:
> > > - Fix error handling problems.
> > > - Link to v4: 
> > > https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v4-1-3881a6408...@gmail.com
> > >
> > > Changes in v4:
> > > - Combine v3 commits into a single commit
> > > - Link to v3: 
> > > https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v3-0-0a728ad45...@gmail.com
> > >
> > > Changes in v3:
> > > - Remove redundant devm_iounmap calls, relying on devres for 
> > > automatic resource cleanup.
> > >
> > > Changes in v2:
> > > - update the subject prefix to "drm/msm/a6xx:", to match the majority 
> > > of other changes to this file.
> > > ---
> > >  drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 33 +++++++++++----------------------
> > >  1 file changed, 11 insertions(+), 22 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
> > > b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > > index 8bea8ef26f77..d26cc6254ef9 100644
> > > --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > > @@ -525,7 +525,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
> > > bool pdc_in_aop = false;
> > >
> > > if (IS_ERR(pdcptr))
> > > -   goto err;
> > > +   return;
> > >
> > > if (adreno_is_a650(adreno_gpu) ||
> > > adreno_is_a660_family(adreno_gpu) ||
> > > @@ -541,7 +541,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
> > > if (!pdc_in_aop) {
> > > seqptr = a6xx_gmu_get_mmio(pdev, "gmu_pdc_seq");
> > > if (IS_ERR(seqptr))
> > > -   goto err;
> > > +   return;
> > > }
> > >
> > > /* Disable SDE clock gating */
> > > @@ -633,12 +633,6 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
> > > wmb();
> > >
> > > a6xx_rpmh_stop(gmu);
> > > -
> > > -err:
> > > -   if (!IS_ERR_OR_NULL(pdcptr))
> > > -   iounmap(pdcptr);
> > > -   if (!IS_ERR_OR_NULL(seqptr))
> > > -   iounmap(seqptr);
> > >  }
> > >
> > >  /*
> > > @@ -1503,7 +1497,7 @@ static void __iomem *a6xx_gmu_get_mmio(struct 
> > > platform_device *pdev,
> > > return ERR_PTR(-EINVAL);
> > > }
> > >
> > > -   ret = ioremap(res->start, resource_size(res));
> > > +   ret = devm_ioremap_resource(&pdev->dev, res);
> >
> > So, this doesn't actually work, failing in __request_region_locked(),
> > because the gmu region partially overlaps with the gpucc region (which
> > is busy).  I think this is intentional, since gmu is controlling the
> > gpu clocks, etc.  In particular REG_A6XX_GPU_CC_GX_GDSCR is in this
> > overlapping region.  Maybe Akhil knows more about GMU.
>
> We don't really need to map gpucc region from driver on behalf of gmu.
> Since we don't access any gpucc register from drm-msm driver, we can
> update the range size to correct this. But due to backward compatibility
> requirement with older dt, can we still enable region locking? I prefer
> it if that is possible.

Actually, when I reduced the region size to not overlap with gpucc,
the region is smaller than REG_A6XX_GPU_CC_GX_GDSCR * 4.

So I guess that register is actually part of gpucc?

BR,
-R

> FYI, kgsl accesses gpucc registers to ensure gdsc has collapsed. So
> gpucc region has to be mapped by kgsl and that is reflected in the kgsl
> device tree.
>
> -Akhil
>
> >
> > BR,
> > -R
> >
> > > if (!ret) {
> > > DRM_DEV_ERROR(&pdev->dev, "Unable to map the %s 
> > > registers\n", name);
> > > return ERR_PTR(-EINVAL);
> > > @@ -1613,13 +1607,13 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu 
> > > *a6xx_gpu, struct device_node *node)
> > > gmu->mmio = a6xx_gmu_get_mmio(pdev, "gmu");
> > > if (IS_ERR(gmu->mmio)) {
> > > ret = PTR_ERR(gmu->mmio);
> > > -   goto err_mmio;
> > > +   goto err_cleanup;
> > > }
> > >
> > > gmu->cxpd = dev_pm_domain_attach_by_name(gmu->dev, "cx");
> > > if (IS_ERR(gmu->cxpd)) {
> > > ret = PTR_ERR(gmu->cxpd);
> > > -   goto err_mmio;
> > > +   goto err_cleanup;
> > > }
> > >
> > > if (!device_link_add(gmu->dev, gmu->cxpd, DL_FLAG_PM_RUNTIME)) {

Re: [PATCH] drm/msm/a6xx: request memory region

2024-06-25 Thread Akhil P Oommen
On Fri, Jun 21, 2024 at 02:09:58PM -0700, Rob Clark wrote:
> On Sat, Jun 8, 2024 at 8:44 AM Kiarash Hajian
>  wrote:
> >
> > The driver's memory regions are currently just ioremap()ed, but not
> > reserved through a request. That's not a bug, but having the request is
> > a little more robust.
> >
> > Implement the region-request through the corresponding managed
> > devres-function.
> >
> > Signed-off-by: Kiarash Hajian 
> > ---
> > Changes in v6:
> > -Fix compile error
> > -Link to v5: 
> > https://lore.kernel.org/all/20240607-memory-v1-1-8664f52fc...@gmail.com
> >
> > Changes in v5:
> > - Fix error handling problems.
> > - Link to v4: 
> > https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v4-1-3881a6408...@gmail.com
> >
> > Changes in v4:
> > - Combine v3 commits into a single commit
> > - Link to v3: 
> > https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v3-0-0a728ad45...@gmail.com
> >
> > Changes in v3:
> > - Remove redundant devm_iounmap calls, relying on devres for automatic 
> > resource cleanup.
> >
> > Changes in v2:
> > - update the subject prefix to "drm/msm/a6xx:", to match the majority 
> > of other changes to this file.
> > ---
> >  drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 33 +++++++++++----------------------
> >  1 file changed, 11 insertions(+), 22 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
> > b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > index 8bea8ef26f77..d26cc6254ef9 100644
> > --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > @@ -525,7 +525,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
> > bool pdc_in_aop = false;
> >
> > if (IS_ERR(pdcptr))
> > -   goto err;
> > +   return;
> >
> > if (adreno_is_a650(adreno_gpu) ||
> > adreno_is_a660_family(adreno_gpu) ||
> > @@ -541,7 +541,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
> > if (!pdc_in_aop) {
> > seqptr = a6xx_gmu_get_mmio(pdev, "gmu_pdc_seq");
> > if (IS_ERR(seqptr))
> > -   goto err;
> > +   return;
> > }
> >
> > /* Disable SDE clock gating */
> > @@ -633,12 +633,6 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
> > wmb();
> >
> > a6xx_rpmh_stop(gmu);
> > -
> > -err:
> > -   if (!IS_ERR_OR_NULL(pdcptr))
> > -   iounmap(pdcptr);
> > -   if (!IS_ERR_OR_NULL(seqptr))
> > -   iounmap(seqptr);
> >  }
> >
> >  /*
> > @@ -1503,7 +1497,7 @@ static void __iomem *a6xx_gmu_get_mmio(struct 
> > platform_device *pdev,
> > return ERR_PTR(-EINVAL);
> > }
> >
> > -   ret = ioremap(res->start, resource_size(res));
> > +   ret = devm_ioremap_resource(&pdev->dev, res);
> 
> So, this doesn't actually work, failing in __request_region_locked(),
> because the gmu region partially overlaps with the gpucc region (which
> is busy).  I think this is intentional, since gmu is controlling the
> gpu clocks, etc.  In particular REG_A6XX_GPU_CC_GX_GDSCR is in this
> overlapping region.  Maybe Akhil knows more about GMU.

We don't really need to map gpucc region from driver on behalf of gmu.
Since we don't access any gpucc register from drm-msm driver, we can
update the range size to correct this. But due to backward compatibility
requirement with older dt, can we still enable region locking? I prefer
it if that is possible.

FYI, kgsl accesses gpucc registers to ensure gdsc has collapsed. So
gpucc region has to be mapped by kgsl and that is reflected in the kgsl
device tree.

-Akhil

> 
> BR,
> -R
> 
> > if (!ret) {
> > DRM_DEV_ERROR(&pdev->dev, "Unable to map the %s 
> > registers\n", name);
> > return ERR_PTR(-EINVAL);
> > @@ -1613,13 +1607,13 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu 
> > *a6xx_gpu, struct device_node *node)
> > gmu->mmio = a6xx_gmu_get_mmio(pdev, "gmu");
> > if (IS_ERR(gmu->mmio)) {
> > ret = PTR_ERR(gmu->mmio);
> > -   goto err_mmio;
> > +   goto err_cleanup;
> > }
> >
> > gmu->cxpd = dev_pm_domain_attach_by_name(gmu->dev, "cx");
> > if (IS_ERR(gmu->cxpd)) {
> > ret = PTR_ERR(gmu->cxpd);
> > -   goto err_mmio;
> > +   goto err_cleanup;
> > }
> >
> > if (!device_link_add(gmu->dev, gmu->cxpd, DL_FLAG_PM_RUNTIME)) {
> > @@ -1635,7 +1629,7 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, 
> > struct device_node *node)
> > gmu->gxpd = dev_pm_domain_attach_by_name(gmu->dev, "gx");
> > if (IS_ERR(gmu->gxpd)) {
> > ret = PTR_ERR(gmu->gxpd);
> > -   goto err_mmio;
> > +   goto err_cleanup;
> > }
> >
> > gmu->initialized = true;
> > @@ -1645,9 +1639,7 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, 
> > struct device_node *node)

Re: [PATCH] drm/msm/a6xx: request memory region

2024-06-21 Thread Rob Clark
On Sat, Jun 8, 2024 at 8:44 AM Kiarash Hajian
 wrote:
>
> The driver's memory regions are currently just ioremap()ed, but not
> reserved through a request. That's not a bug, but having the request is
> a little more robust.
>
> Implement the region-request through the corresponding managed
> devres-function.
>
> Signed-off-by: Kiarash Hajian 
> ---
> Changes in v6:
> -Fix compile error
> -Link to v5: 
> https://lore.kernel.org/all/20240607-memory-v1-1-8664f52fc...@gmail.com
>
> Changes in v5:
> - Fix error handling problems.
> - Link to v4: 
> https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v4-1-3881a6408...@gmail.com
>
> Changes in v4:
> - Combine v3 commits into a single commit
> - Link to v3: 
> https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v3-0-0a728ad45...@gmail.com
>
> Changes in v3:
> - Remove redundant devm_iounmap calls, relying on devres for automatic 
> resource cleanup.
>
> Changes in v2:
> - update the subject prefix to "drm/msm/a6xx:", to match the majority of 
> other changes to this file.
> ---
>  drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 33 +++++++++++----------------------
>  1 file changed, 11 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> index 8bea8ef26f77..d26cc6254ef9 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> @@ -525,7 +525,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
> bool pdc_in_aop = false;
>
> if (IS_ERR(pdcptr))
> -   goto err;
> +   return;
>
> if (adreno_is_a650(adreno_gpu) ||
> adreno_is_a660_family(adreno_gpu) ||
> @@ -541,7 +541,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
> if (!pdc_in_aop) {
> seqptr = a6xx_gmu_get_mmio(pdev, "gmu_pdc_seq");
> if (IS_ERR(seqptr))
> -   goto err;
> +   return;
> }
>
> /* Disable SDE clock gating */
> @@ -633,12 +633,6 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
> wmb();
>
> a6xx_rpmh_stop(gmu);
> -
> -err:
> -   if (!IS_ERR_OR_NULL(pdcptr))
> -   iounmap(pdcptr);
> -   if (!IS_ERR_OR_NULL(seqptr))
> -   iounmap(seqptr);
>  }
>
>  /*
> @@ -1503,7 +1497,7 @@ static void __iomem *a6xx_gmu_get_mmio(struct 
> platform_device *pdev,
> return ERR_PTR(-EINVAL);
> }
>
> -   ret = ioremap(res->start, resource_size(res));
> +   ret = devm_ioremap_resource(&pdev->dev, res);

So, this doesn't actually work, failing in __request_region_locked(),
because the gmu region partially overlaps with the gpucc region (which
is busy).  I think this is intentional, since gmu is controlling the
gpu clocks, etc.  In particular REG_A6XX_GPU_CC_GX_GDSCR is in this
overlapping region.  Maybe Akhil knows more about GMU.

BR,
-R

> if (!ret) {
> DRM_DEV_ERROR(&pdev->dev, "Unable to map the %s registers\n", 
> name);
> return ERR_PTR(-EINVAL);
> @@ -1613,13 +1607,13 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, 
> struct device_node *node)
> gmu->mmio = a6xx_gmu_get_mmio(pdev, "gmu");
> if (IS_ERR(gmu->mmio)) {
> ret = PTR_ERR(gmu->mmio);
> -   goto err_mmio;
> +   goto err_cleanup;
> }
>
> gmu->cxpd = dev_pm_domain_attach_by_name(gmu->dev, "cx");
> if (IS_ERR(gmu->cxpd)) {
> ret = PTR_ERR(gmu->cxpd);
> -   goto err_mmio;
> +   goto err_cleanup;
> }
>
> if (!device_link_add(gmu->dev, gmu->cxpd, DL_FLAG_PM_RUNTIME)) {
> @@ -1635,7 +1629,7 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, 
> struct device_node *node)
> gmu->gxpd = dev_pm_domain_attach_by_name(gmu->dev, "gx");
> if (IS_ERR(gmu->gxpd)) {
> ret = PTR_ERR(gmu->gxpd);
> -   goto err_mmio;
> +   goto err_cleanup;
> }
>
> gmu->initialized = true;
> @@ -1645,9 +1639,7 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, 
> struct device_node *node)
>  detach_cxpd:
> dev_pm_domain_detach(gmu->cxpd, false);
>
> -err_mmio:
> -   iounmap(gmu->mmio);
> -
> +err_cleanup:
> /* Drop reference taken in of_find_device_by_node */
> put_device(gmu->dev);
>
> @@ -1762,7 +1754,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct 
> device_node *node)
> gmu->rscc = a6xx_gmu_get_mmio(pdev, "rscc");
> if (IS_ERR(gmu->rscc)) {
> ret = -ENODEV;
> -   goto err_mmio;
> +   goto err_cleanup;
> }
> } else {
> gmu->rscc = gmu->mmio + 0x23000;
> @@ -1774,13 +1766,13 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct 
> device_node *node)
>
> if (gmu->hfi_irq < 0 || gmu->gmu_irq < 0) {

Re: [PATCH] drm/msm/a6xx: request memory region

2024-06-10 Thread Dmitry Baryshkov
On Sat, Jun 08, 2024 at 11:43:47AM -0400, Kiarash Hajian wrote:
> The driver's memory regions are currently just ioremap()ed, but not
> reserved through a request. That's not a bug, but having the request is
> a little more robust.
> 
> Implement the region-request through the corresponding managed
> devres-function.
> 
> Signed-off-by: Kiarash Hajian 
> ---
> Changes in v6:
> -Fix compile error
> -Link to v5: 
> https://lore.kernel.org/all/20240607-memory-v1-1-8664f52fc...@gmail.com
> 
> Changes in v5:
> - Fix error handling problems.
> - Link to v4: 
> https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v4-1-3881a6408...@gmail.com
> 
> Changes in v4:
> - Combine v3 commits into a single commit
> - Link to v3: 
> https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v3-0-0a728ad45...@gmail.com
> 
> Changes in v3:
> - Remove redundant devm_iounmap calls, relying on devres for automatic 
> resource cleanup.
> 
> Changes in v2:
> - update the subject prefix to "drm/msm/a6xx:", to match the majority of 
> other changes to this file.
> ---
>  drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 33 +++++++++++----------------------
>  1 file changed, 11 insertions(+), 22 deletions(-)
> 

Reviewed-by: Dmitry Baryshkov 


-- 
With best wishes
Dmitry


[PATCH] drm/msm/a6xx: request memory region

2024-06-08 Thread Kiarash Hajian
The driver's memory regions are currently just ioremap()ed, but not
reserved through a request. That's not a bug, but having the request is
a little more robust.

Implement the region-request through the corresponding managed
devres-function.

Signed-off-by: Kiarash Hajian 
---
Changes in v6:
-Fix compile error
-Link to v5: 
https://lore.kernel.org/all/20240607-memory-v1-1-8664f52fc...@gmail.com

Changes in v5:
- Fix error handling problems.
- Link to v4: 
https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v4-1-3881a6408...@gmail.com

Changes in v4:
- Combine v3 commits into a single commit
- Link to v3: 
https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v3-0-0a728ad45...@gmail.com

Changes in v3:
- Remove redundant devm_iounmap calls, relying on devres for automatic 
resource cleanup.

Changes in v2:
- update the subject prefix to "drm/msm/a6xx:", to match the majority of 
other changes to this file.
---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 33 +++++++++++----------------------
 1 file changed, 11 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 8bea8ef26f77..d26cc6254ef9 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -525,7 +525,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
bool pdc_in_aop = false;
 
if (IS_ERR(pdcptr))
-   goto err;
+   return;
 
if (adreno_is_a650(adreno_gpu) ||
adreno_is_a660_family(adreno_gpu) ||
@@ -541,7 +541,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
if (!pdc_in_aop) {
seqptr = a6xx_gmu_get_mmio(pdev, "gmu_pdc_seq");
if (IS_ERR(seqptr))
-   goto err;
+   return;
}
 
/* Disable SDE clock gating */
@@ -633,12 +633,6 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
wmb();
 
a6xx_rpmh_stop(gmu);
-
-err:
-   if (!IS_ERR_OR_NULL(pdcptr))
-   iounmap(pdcptr);
-   if (!IS_ERR_OR_NULL(seqptr))
-   iounmap(seqptr);
 }
 
 /*
@@ -1503,7 +1497,7 @@ static void __iomem *a6xx_gmu_get_mmio(struct 
platform_device *pdev,
return ERR_PTR(-EINVAL);
}
 
-   ret = ioremap(res->start, resource_size(res));
+   ret = devm_ioremap_resource(&pdev->dev, res);
if (!ret) {
DRM_DEV_ERROR(&pdev->dev, "Unable to map the %s registers\n", 
name);
return ERR_PTR(-EINVAL);
@@ -1613,13 +1607,13 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, 
struct device_node *node)
gmu->mmio = a6xx_gmu_get_mmio(pdev, "gmu");
if (IS_ERR(gmu->mmio)) {
ret = PTR_ERR(gmu->mmio);
-   goto err_mmio;
+   goto err_cleanup;
}
 
gmu->cxpd = dev_pm_domain_attach_by_name(gmu->dev, "cx");
if (IS_ERR(gmu->cxpd)) {
ret = PTR_ERR(gmu->cxpd);
-   goto err_mmio;
+   goto err_cleanup;
}
 
if (!device_link_add(gmu->dev, gmu->cxpd, DL_FLAG_PM_RUNTIME)) {
@@ -1635,7 +1629,7 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, 
struct device_node *node)
gmu->gxpd = dev_pm_domain_attach_by_name(gmu->dev, "gx");
if (IS_ERR(gmu->gxpd)) {
ret = PTR_ERR(gmu->gxpd);
-   goto err_mmio;
+   goto err_cleanup;
}
 
gmu->initialized = true;
@@ -1645,9 +1639,7 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, 
struct device_node *node)
 detach_cxpd:
dev_pm_domain_detach(gmu->cxpd, false);
 
-err_mmio:
-   iounmap(gmu->mmio);
-
+err_cleanup:
/* Drop reference taken in of_find_device_by_node */
put_device(gmu->dev);
 
@@ -1762,7 +1754,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct 
device_node *node)
gmu->rscc = a6xx_gmu_get_mmio(pdev, "rscc");
if (IS_ERR(gmu->rscc)) {
ret = -ENODEV;
-   goto err_mmio;
+   goto err_cleanup;
}
} else {
gmu->rscc = gmu->mmio + 0x23000;
@@ -1774,13 +1766,13 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 
if (gmu->hfi_irq < 0 || gmu->gmu_irq < 0) {
ret = -ENODEV;
-   goto err_mmio;
+   goto err_cleanup;
}
 
gmu->cxpd = dev_pm_domain_attach_by_name(gmu->dev, "cx");
if (IS_ERR(gmu->cxpd)) {
ret = PTR_ERR(gmu->cxpd);
-   goto err_mmio;
+   goto err_cleanup;
}
 
link = device_link_add(gmu->dev, gmu->cxpd, DL_FLAG_PM_RUNTIME);
@@ -1824,10 +1816,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 detach_cxpd:
dev_pm_domain_detach(gmu->cxpd, false);
 
-err_mmio:
-   

Re: [PATCH] drm/msm/a6xx: request memory region

2024-06-07 Thread Dmitry Baryshkov
On Fri, Jun 07, 2024 at 10:00:04AM -0400, Kiarash Hajian wrote:
> The driver's memory regions are currently just ioremap()ed, but not
> reserved through a request. That's not a bug, but having the request is
> a little more robust.
> 
> Implement the region-request through the corresponding managed
> devres-function.

Please at least compile-test the patch before sending.

> 
> Signed-off-by: Kiarash Hajian 
> ---
> To: Rob Clark 
> To: Abhinav Kumar 
> To: Dmitry Baryshkov 
> To: Sean Paul 
> To: Marijn Suijten 
> To: David Airlie 
> To: Daniel Vetter 
> Cc: linux-arm-...@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: freedr...@lists.freedesktop.org
> Cc: linux-ker...@vger.kernel.org
> Signed-off-by: Kiarash Hajian 
> 
> Changes in v5:
> - Fix error handling problems.
> - Link to v4: https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v4-1-3881a6408...@gmail.com
> 
> Changes in v4:
> - Combine v3 commits into a single commit
> - Link to v3: https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v3-0-0a728ad45...@gmail.com
> 
> Changes in v3:
> - Remove redundant devm_iounmap calls, relying on devres for automatic resource cleanup.
> 
> Changes in v2:
> - update the subject prefix to "drm/msm/a6xx:", to match the majority of other changes to this file.
> ---
>  drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 33 +++--
>  1 file changed, 15 insertions(+), 18 deletions(-)
> 

-- 
With best wishes
Dmitry


[PATCH] drm/msm/a6xx: request memory region

2024-06-07 Thread Kiarash Hajian
The driver's memory regions are currently just ioremap()ed, but not
reserved through a request. That's not a bug, but having the request is
a little more robust.

Implement the region-request through the corresponding managed
devres-function.

Signed-off-by: Kiarash Hajian 
---
To: Rob Clark 
To: Abhinav Kumar 
To: Dmitry Baryshkov 
To: Sean Paul 
To: Marijn Suijten 
To: David Airlie 
To: Daniel Vetter 
Cc: linux-arm-...@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: freedr...@lists.freedesktop.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Kiarash Hajian 

Changes in v5:
- Fix error handling problems.
- Link to v4: https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v4-1-3881a6408...@gmail.com

Changes in v4:
- Combine v3 commits into a single commit
- Link to v3: https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v3-0-0a728ad45...@gmail.com

Changes in v3:
- Remove redundant devm_iounmap calls, relying on devres for automatic resource cleanup.

Changes in v2:
- update the subject prefix to "drm/msm/a6xx:", to match the majority of other changes to this file.
---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 33 +++--
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 8bea8ef26f77..35323bf2d844 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -635,10 +635,12 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
a6xx_rpmh_stop(gmu);
 
 err:
-   if (!IS_ERR_OR_NULL(pdcptr))
-   iounmap(pdcptr);
-   if (!IS_ERR_OR_NULL(seqptr))
-   iounmap(seqptr);
+   if (!IS_ERR_OR_NULL(pdcptr)){
+return ERR_PTR(-EINVAL);
+   }
+   if (!IS_ERR_OR_NULL(seqptr)){
+return ERR_PTR(-EINVAL);
+   }
 }
 
 /*
@@ -1503,7 +1505,7 @@ static void __iomem *a6xx_gmu_get_mmio(struct platform_device *pdev,
return ERR_PTR(-EINVAL);
}
 
-   ret = ioremap(res->start, resource_size(res));
+   ret = devm_ioremap_resource(&pdev->dev, res);
if (!ret) {
DRM_DEV_ERROR(&pdev->dev, "Unable to map the %s registers\n", name);
return ERR_PTR(-EINVAL);
@@ -1613,13 +1615,13 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
gmu->mmio = a6xx_gmu_get_mmio(pdev, "gmu");
if (IS_ERR(gmu->mmio)) {
ret = PTR_ERR(gmu->mmio);
-   goto err_mmio;
+   goto err_cleanup;
}
 
gmu->cxpd = dev_pm_domain_attach_by_name(gmu->dev, "cx");
if (IS_ERR(gmu->cxpd)) {
ret = PTR_ERR(gmu->cxpd);
-   goto err_mmio;
+   goto err_cleanup;
}
 
if (!device_link_add(gmu->dev, gmu->cxpd, DL_FLAG_PM_RUNTIME)) {
@@ -1635,7 +1637,7 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
gmu->gxpd = dev_pm_domain_attach_by_name(gmu->dev, "gx");
if (IS_ERR(gmu->gxpd)) {
ret = PTR_ERR(gmu->gxpd);
-   goto err_mmio;
+   goto err_cleanup;
}
 
gmu->initialized = true;
@@ -1645,9 +1647,7 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 detach_cxpd:
dev_pm_domain_detach(gmu->cxpd, false);
 
-err_mmio:
-   iounmap(gmu->mmio);
-
+err_cleanup:
/* Drop reference taken in of_find_device_by_node */
put_device(gmu->dev);
 
@@ -1762,7 +1762,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
gmu->rscc = a6xx_gmu_get_mmio(pdev, "rscc");
if (IS_ERR(gmu->rscc)) {
ret = -ENODEV;
-   goto err_mmio;
+   goto err_cleanup;
}
} else {
gmu->rscc = gmu->mmio + 0x23000;
@@ -1774,13 +1774,13 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 
if (gmu->hfi_irq < 0 || gmu->gmu_irq < 0) {
ret = -ENODEV;
-   goto err_mmio;
+   goto err_cleanup;
}
 
gmu->cxpd = dev_pm_domain_attach_by_name(gmu->dev, "cx");
if (IS_ERR(gmu->cxpd)) {
ret = PTR_ERR(gmu->cxpd);
-   goto err_mmio;
+   goto err_cleanup;
}
 
link = device_link_add(gmu->dev, gmu->cxpd, DL_FLAG_PM_RUNTIME);
@@ -1824,10 +1824,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 detach_cxpd:
dev_pm_domain_detach(gmu->cxpd, false);
 
-err_mmio:
-   iounmap(gmu->mmio);
-   if (platform_get_resource_byname(pdev, IORESOURCE_MEM, "rscc"))
-   iounmap(gmu->rscc);
+err_cleanup:
free_irq(gmu->gmu_irq, gmu);
free_irq(gmu->hfi_irq, gmu);
 

---
base-commit: 1b294a1f35616977caddaddf3e9d28e576a1adbc