Re: [Intel-gfx] [RFC PATCH 00/20] Initial Xe driver submission

2023-01-04 Thread Alyssa Rosenzweig
> > For one thing, setting that up would be a lot of up front infrastructure
> > work. I'm not sure how to even pull that off when Xe is still
> > out-of-tree and i915 development plunges on upstream as ever.
> > 
> > For another, realistically, the overlap between supported platforms is
> > going to end at some point, and eventually new platforms are only going
> > to be supported with Xe. That's going to open up new possibilities for
> > refactoring also the display code. I think it would be premature to lock
> > in to a common directory structure or a common helper module at this
> > point.
> > 
> > I'm not saying no to the idea, and we've contemplated it before, but I
> > think there are still too many moving parts to decide to go that way.
> 
> FWIW, I actually have the same dilemma with the driver for new Mali GPUs
> I'm working on. I initially started making it a sub-driver of the
> existing panfrost driver (some HW blocks are similar, like the
> IOMMU and a few other things, and some SW abstractions can be shared here
> and there, like the GEM allocator logic). But I'm now considering
> forking the driver (after Alyssa planted the seed :-)), not only
> because I want to start from a clean sheet on the uAPI front
> (wouldn't be an issue in your case, because you're talking about
> sharing helpers, not the driver frontend), but also because any refactor
> to panfrost is a potential source of regression for existing users. So,
> I tend to agree with Jani here, trying to share code before things have
> settled down is likely to cause pain to both Xe and i915
> users+developers.

++

I pretend to have never written a kernel driver, so I won't comment
there. But Boris and I were previously bitten trying to share code between
our GL and VK drivers before VK settled down, causing pain for both. I
don't want a kernel-side repeat of that (for either Mali or Intel).

I tend to think that, if you're tempted to share a driver frontend
without the backend, that's a sign there's too much boilerplate in the
frontend and maybe there need to be more helpers somewhere. For Xe,
that doesn't apply since the hw overlaps between the drivers, but for
Mali there really is more that's different than similar, and there's an
obvious, acute break between "old Mali" and "new Mali". The shared
"instantiate a DRM driver" boilerplate is pretty trivial, and the MMU
code is as simple as it gets...
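
As an aside, for anyone not steeped in DRM, here is a minimal sketch of
what that "instantiate a DRM driver" boilerplate amounts to. The mali_new_*
names are made up for illustration and don't correspond to any in-tree
driver; only the DRM core entry points (DEFINE_DRM_GEM_FOPS(),
devm_drm_dev_alloc(), drm_dev_register()) are the real ones.

#include <linux/module.h>
#include <linux/platform_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem.h>

struct mali_new_device {
	struct drm_device ddev;
	/* device-specific state (MMU, job queues, ...) would live here */
};

/* Standard file_operations for a GEM-based driver. */
DEFINE_DRM_GEM_FOPS(mali_new_fops);

static const struct drm_driver mali_new_drm_driver = {
	.driver_features = DRIVER_GEM | DRIVER_SYNCOBJ,
	.fops		 = &mali_new_fops,
	.name		 = "mali_new",
	.desc		 = "Hypothetical new Mali driver",
	.date		 = "20230104",
	.major		 = 1,
	.minor		 = 0,
};

static int mali_new_probe(struct platform_device *pdev)
{
	struct mali_new_device *mdev;

	/* Allocate the drm_device embedded in the driver-private struct. */
	mdev = devm_drm_dev_alloc(&pdev->dev, &mali_new_drm_driver,
				  struct mali_new_device, ddev);
	if (IS_ERR(mdev))
		return PTR_ERR(mdev);

	/* HW init, IOMMU/MMU setup, scheduler setup, etc. go here. */

	/* Make the device visible to userspace. */
	return drm_dev_register(&mdev->ddev, 0);
}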


Re: [Intel-gfx] [PATCH v6 04/22] drm/panfrost: Fix shrinker list corruption by madvise IOCTL

2022-06-21 Thread Alyssa Rosenzweig
Acked-by: Alyssa Rosenzweig 

On Fri, May 27, 2022 at 02:50:22AM +0300, Dmitry Osipenko wrote:
> Calling the madvise IOCTL twice on a BO causes memory shrinker list corruption
> and crashes the kernel, because the BO is already on the list and gets added to
> it again, while it should be removed from the list before being re-added.
> Fix it.
> 
> Cc: sta...@vger.kernel.org
> Fixes: 013b65101315 ("drm/panfrost: Add madvise and shrinker support")
> Signed-off-by: Dmitry Osipenko 
> ---
>  drivers/gpu/drm/panfrost/panfrost_drv.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index 087e69b98d06..b1e6d238674f 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -433,8 +433,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>  
>   if (args->retained) {
>   if (args->madv == PANFROST_MADV_DONTNEED)
> - list_add_tail(&bo->base.madv_list,
> -   &pfdev->shrinker_list);
> + list_move_tail(&bo->base.madv_list,
> +&pfdev->shrinker_list);
>   else if (args->madv == PANFROST_MADV_WILLNEED)
>   list_del_init(&bo->base.madv_list);
>   }
> -- 
> 2.35.3
> 
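
For context on why list_move_tail() is the right call here: list_add_tail()
assumes the entry is not currently linked anywhere, so calling it on a BO
that is already on the shrinker list leaves its old neighbours pointing at a
re-linked node and corrupts the list; list_move_tail() unlinks the entry
first. Simplified from include/linux/list.h (illustration, not a verbatim
copy):

static inline void list_add_tail(struct list_head *new, struct list_head *head)
{
	/* Links 'new' between head->prev and head; 'new' must not already
	 * be on a list, otherwise its old neighbours still point at it. */
	__list_add(new, head->prev, head);
}

static inline void list_move_tail(struct list_head *list, struct list_head *head)
{
	/* Unlink from whatever list the entry is on, then append to 'head',
	 * so calling this on an already-listed entry is safe. */
	__list_del_entry(list);
	list_add_tail(list, head);
}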


Re: [Intel-gfx] [PATCH v6 22/22] drm/panfrost: Switch to generic memory shrinker

2022-06-21 Thread Alyssa Rosenzweig
Acked-by: Alyssa Rosenzweig 

On Fri, May 27, 2022 at 02:50:40AM +0300, Dmitry Osipenko wrote:
> Replace Panfrost's memory shrinker with a generic drm-shmem memory
> shrinker.
> 
> Tested-by: Steven Price 
> Signed-off-by: Dmitry Osipenko 
> ---
>  drivers/gpu/drm/panfrost/Makefile |   1 -
>  drivers/gpu/drm/panfrost/panfrost_device.h|   4 -
>  drivers/gpu/drm/panfrost/panfrost_drv.c   |  19 +--
>  drivers/gpu/drm/panfrost/panfrost_gem.c   |  33 +++--
>  drivers/gpu/drm/panfrost/panfrost_gem.h   |   9 --
>  .../gpu/drm/panfrost/panfrost_gem_shrinker.c  | 129 --
>  drivers/gpu/drm/panfrost/panfrost_job.c   |  18 ++-
>  7 files changed, 42 insertions(+), 171 deletions(-)
>  delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> 
> diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile
> index b71935862417..ecf0864cb515 100644
> --- a/drivers/gpu/drm/panfrost/Makefile
> +++ b/drivers/gpu/drm/panfrost/Makefile
> @@ -5,7 +5,6 @@ panfrost-y := \
>   panfrost_device.o \
>   panfrost_devfreq.o \
>   panfrost_gem.o \
> - panfrost_gem_shrinker.o \
>   panfrost_gpu.o \
>   panfrost_job.o \
>   panfrost_mmu.o \
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
> index 8b25278f34c8..fe04b21fc044 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.h
> @@ -115,10 +115,6 @@ struct panfrost_device {
>   atomic_t pending;
>   } reset;
>  
> - struct mutex shrinker_lock;
> - struct list_head shrinker_list;
> - struct shrinker shrinker;
> -
>   struct panfrost_devfreq pfdevfreq;
>  };
>  
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index 859e240161d1..b77c99ba2475 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -160,7 +160,6 @@ panfrost_lookup_bos(struct drm_device *dev,
>   break;
>   }
>  
> - atomic_inc(&bo->gpu_usecount);
>   job->mappings[i] = mapping;
>   }
>  
> @@ -392,7 +391,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>  {
>   struct panfrost_file_priv *priv = file_priv->driver_priv;
>   struct drm_panfrost_madvise *args = data;
> - struct panfrost_device *pfdev = dev->dev_private;
>   struct drm_gem_object *gem_obj;
>   struct panfrost_gem_object *bo;
>   int ret = 0;
> @@ -409,7 +407,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>   if (ret)
>   goto out_put_object;
>  
> - mutex_lock(&pfdev->shrinker_lock);
>   mutex_lock(&bo->mappings.lock);
>   if (args->madv == PANFROST_MADV_DONTNEED) {
>   struct panfrost_gem_mapping *first;
> @@ -435,17 +432,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>  
>   args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
>  
> - if (args->retained) {
> - if (args->madv == PANFROST_MADV_DONTNEED)
> - list_move_tail(&bo->base.madv_list,
> -&pfdev->shrinker_list);
> - else if (args->madv == PANFROST_MADV_WILLNEED)
> - list_del_init(&bo->base.madv_list);
> - }
> -
>  out_unlock_mappings:
>   mutex_unlock(&bo->mappings.lock);
> - mutex_unlock(&pfdev->shrinker_lock);
>   dma_resv_unlock(bo->base.base.resv);
>  out_put_object:
>   drm_gem_object_put(gem_obj);
> @@ -577,9 +565,6 @@ static int panfrost_probe(struct platform_device *pdev)
>   ddev->dev_private = pfdev;
>   pfdev->ddev = ddev;
>  
> - mutex_init(&pfdev->shrinker_lock);
> - INIT_LIST_HEAD(&pfdev->shrinker_list);
> -
>   err = panfrost_device_init(pfdev);
>   if (err) {
>   if (err != -EPROBE_DEFER)
> @@ -601,7 +586,7 @@ static int panfrost_probe(struct platform_device *pdev)
>   if (err < 0)
>   goto err_out1;
>  
> - panfrost_gem_shrinker_init(ddev);
> + drm_gem_shmem_shrinker_register(ddev);
>  
>   return 0;
>  
> @@ -619,8 +604,8 @@ static int panfrost_remove(struct platform_device *pdev)
>   struct panfrost_device *pfdev = platform_get_drvdata(pdev);
>   struct drm_device *ddev = pfdev->ddev;
>  
> + drm_gem_shmem_shrinker_unregister(ddev);
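
(The archive cuts the remove() hunk short here. As a rough reconstruction
rather than a verbatim excerpt, the probe/remove pairing this series moves
Panfrost to looks like the following, using only the helper names visible in
the quoted hunks:)

static int panfrost_probe_sketch(struct platform_device *pdev)
{
	struct panfrost_device *pfdev = platform_get_drvdata(pdev);
	struct drm_device *ddev = pfdev->ddev;
	int err;

	/* ... panfrost_device_init() and friends as before ... */

	err = drm_dev_register(ddev, 0);
	if (err < 0)
		return err;

	/* Generic drm-shmem shrinker replaces panfrost_gem_shrinker_init(). */
	drm_gem_shmem_shrinker_register(ddev);
	return 0;
}

static int panfrost_remove_sketch(struct platform_device *pdev)
{
	struct panfrost_device *pfdev = platform_get_drvdata(pdev);
	struct drm_device *ddev = pfdev->ddev;

	/* Unregister the shrinker before the device goes away, mirroring probe. */
	drm_gem_shmem_shrinker_unregister(ddev);
	drm_dev_unregister(ddev);
	/* ... panfrost_device_fini() etc. ... */
	return 0;
}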

Re: [Intel-gfx] [PATCH v5 07/20] drm/panfrost: use scheduler dependency tracking

2021-08-05 Thread Alyssa Rosenzweig
Acked-by: Alyssa Rosenzweig 

On Thu, Aug 05, 2021 at 12:46:52PM +0200, Daniel Vetter wrote:
> Just deletes some code that's now more shared.
> 
> Note that thanks to the split into drm_sched_job_init/arm we can now
> easily pull the _init() part out from under the submission lock, way ahead
> to where we're adding the sync file in-fences as dependencies.
> 
> v2: Correctly clean up the partially set up job, now that job_init()
> and job_arm() are apart (Emma).
> 
> v3: Rebased over renamed functions for adding dependencies
> 
> Acked-by: Emma Anholt 
> Reviewed-by: Steven Price  (v3)
> Signed-off-by: Daniel Vetter 
> Cc: Rob Herring 
> Cc: Tomeu Vizoso 
> Cc: Steven Price 
> Cc: Alyssa Rosenzweig 
> Cc: Sumit Semwal 
> Cc: "Christian K??nig" 
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> Cc: Emma Anholt 
> ---
>  drivers/gpu/drm/panfrost/panfrost_drv.c | 16 ---
>  drivers/gpu/drm/panfrost/panfrost_job.c | 38 -
>  drivers/gpu/drm/panfrost/panfrost_job.h |  5 +---
>  3 files changed, 18 insertions(+), 41 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index 1ffaef5ec5ff..16212b6b202e 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -218,7 +218,7 @@ panfrost_copy_in_sync(struct drm_device *dev,
>   if (ret)
>   goto fail;
>  
> - ret = drm_gem_fence_array_add(&job->deps, fence);
> + ret = drm_sched_job_add_dependency(&job->base, fence);
>  
>   if (ret)
>   goto fail;
> @@ -236,7 +236,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
>   struct drm_panfrost_submit *args = data;
>   struct drm_syncobj *sync_out = NULL;
>   struct panfrost_job *job;
> - int ret = 0;
> + int ret = 0, slot;
>  
>   if (!args->jc)
>   return -EINVAL;
> @@ -258,14 +258,20 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
>  
>   kref_init(&job->refcount);
>  
> - xa_init_flags(&job->deps, XA_FLAGS_ALLOC);
> -
>   job->pfdev = pfdev;
>   job->jc = args->jc;
>   job->requirements = args->requirements;
>   job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
>   job->file_priv = file->driver_priv;
>  
> + slot = panfrost_job_get_slot(job);
> +
> + ret = drm_sched_job_init(&job->base,
> +  &job->file_priv->sched_entity[slot],
> +  NULL);
> + if (ret)
> + goto fail_job_put;
> +
>   ret = panfrost_copy_in_sync(dev, file, args, job);
>   if (ret)
>   goto fail_job;
> @@ -283,6 +289,8 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
>   drm_syncobj_replace_fence(sync_out, job->render_done_fence);
>  
>  fail_job:
> + drm_sched_job_cleanup(&job->base);
> +fail_job_put:
>   panfrost_job_put(job);
>  fail_out_sync:
>   if (sync_out)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 4bc962763e1f..a98f507dc779 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -102,7 +102,7 @@ static struct dma_fence *panfrost_fence_create(struct panfrost_device *pfdev, in
>   return &fence->base;
>  }
>  
> -static int panfrost_job_get_slot(struct panfrost_job *job)
> +int panfrost_job_get_slot(struct panfrost_job *job)
>  {
>   /* JS0: fragment jobs.
>* JS1: vertex/tiler jobs
> @@ -242,13 +242,14 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
>  
>  static int panfrost_acquire_object_fences(struct drm_gem_object **bos,
> int bo_count,
> -   struct xarray *deps)
> +   struct drm_sched_job *job)
>  {
>   int i, ret;
>  
>   for (i = 0; i < bo_count; i++) {
>   /* panfrost always uses write mode in its current uapi */
> - ret = drm_gem_fence_array_add_implicit(deps, bos[i], true);
> + ret = drm_sched_job_add_implicit_dependencies(job, bos[i],
> +   true);
>   if (ret)
>   return ret;
>   }
> @@ -269,31 +270,21 @@ static void panfrost_attach_object_fences(st
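
(The archive truncates the rest of the diff here. To make the ordering
described in the commit message concrete, a hedged sketch of the submission
flow after this patch follows; locking, BO lookup and most error handling
are elided, and only the entry points visible in the quoted hunks are used:)

static int panfrost_submit_sketch(struct panfrost_job *job,
				  struct dma_fence *in_fence)
{
	int slot = panfrost_job_get_slot(job);
	int ret;

	/* 1. drm_sched_job_init() no longer needs the submission lock, so it
	 *    runs first, before any dependencies are collected. */
	ret = drm_sched_job_init(&job->base,
				 &job->file_priv->sched_entity[slot], NULL);
	if (ret)
		return ret;

	/* 2. Explicit in-fences from the submit ioctl become scheduler
	 *    dependencies instead of entries in a driver-private xarray. */
	if (in_fence) {
		ret = drm_sched_job_add_dependency(&job->base, in_fence);
		if (ret)
			goto cleanup;
	}

	/* 3. panfrost_job_push() then takes the submission lock, adds the
	 *    implicit BO fences via drm_sched_job_add_implicit_dependencies(),
	 *    arms the job and pushes it to the scheduler entity. */
	ret = panfrost_job_push(job);
	if (ret)
		goto cleanup;

	return 0;

cleanup:
	drm_sched_job_cleanup(&job->base);
	return ret;
}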