RE: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update

2022-06-14 Thread Kasireddy, Vivek
Hi DW,

> 
> On Thu, Jun 09, 2022 at 06:24:43AM +0200, Gerd Hoffmann wrote:
> > On Fri, Jun 03, 2022 at 02:18:49PM -0700, Dongwon Kim wrote:
> > > Having one fence for a vgfb would cause a conflict in case there are
> > > multiple planes referencing the same vgfb (e.g. an Xorg screen covering
> > > two displays in extended mode) being flushed simultaneously. So it makes
> > > sense to use a separate fence for each plane update to prevent this.
> > >
> > > With the suggested code change, vgfb->fence is no longer required, so
> > > both prepare_fb and cleanup_fb are removed since they only handled fence
> > > creation/freeing.
> >
> > The fences are allocated and released in prepare_fb + cleanup_fb for a
> > reason: atomic_update must not fail.
> 
> If fence allocation fails, it falls back to the non-fence path, so the
> primary-plane update won't fail.
> 
> For the cursor-plane update, it currently returns if the fence is NULL, but
> we could change it to proceed and simply skip the wait, like:
[Kasireddy, Vivek] But the cursor-plane update is always tied to a fence based
on the way it works now, and we have to fail if there is no fence.

> 
> if (fence) {
>         dma_fence_wait(&fence->f, true);
>         dma_fence_put(&fence->f);
> }
> 
> Or maybe I can limit my suggested changes to the primary-plane update only.
> 
> What do you think about these?
> 
> >
> > I guess virtio-gpu must be fixed to use drm_plane_state->fence
> > correctly ...
> 
> I was thinking about this too, but the current functions (e.g.
> virtio_gpu_cmd_transfer_to_host_2d) take "struct virtio_gpu_fence".
> I am not sure what the best way is to connect drm_plane_state->fence to
> virtio_gpu_fence without changing major function interfaces.
[Kasireddy, Vivek] FWIU, we cannot use drm_plane_state->fence as it is used
by the drm core to handle explicit fences. So, I think a cleaner way is to
subclass the base drm_plane_state and move the fence from
virtio_gpu_framebuffer to a new struct virtio_gpu_plane_state. This way, we
can create the fence in prepare_fb() and use it for synchronization in
resource_flush().
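A minimal sketch of that idea (the names virtio_gpu_plane_state and
to_virtio_gpu_plane_state are only illustrative here, not something from this
patch):

struct virtio_gpu_plane_state {
	struct drm_plane_state base;
	struct virtio_gpu_fence *fence;
};
#define to_virtio_gpu_plane_state(x) \
	container_of(x, struct virtio_gpu_plane_state, base)

prepare_fb() would then allocate the fence in this driver-private state (so
atomic_update still cannot fail), and resource_flush()/cleanup_fb() would
consume and release it.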

Thanks,
Vivek

> 
> >
> > take care,
> >   Gerd
> >


Re: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update

2022-06-14 Thread Dongwon Kim
On Thu, Jun 09, 2022 at 06:24:43AM +0200, Gerd Hoffmann wrote:
> On Fri, Jun 03, 2022 at 02:18:49PM -0700, Dongwon Kim wrote:
> > Having one fence for a vgfb would cause a conflict in case there are
> > multiple planes referencing the same vgfb (e.g. an Xorg screen covering
> > two displays in extended mode) being flushed simultaneously. So it makes
> > sense to use a separate fence for each plane update to prevent this.
> > 
> > With the suggested code change, vgfb->fence is no longer required, so
> > both prepare_fb and cleanup_fb are removed since they only handled fence
> > creation/freeing.
> 
> The fences are allocated and released in prepare_fb + cleanup_fb for a
> reason: atomic_update must not fail.

If fence allocation fails, it falls back to the non-fence path, so the
primary-plane update won't fail.

For the cursor-plane update, it currently returns if the fence is NULL, but
we could change it to proceed and simply skip the wait, like:

if (fence) {
        dma_fence_wait(&fence->f, true);
        dma_fence_put(&fence->f);
}

Or maybe I can limit my suggested changes to the primary-plane update only.

What do you think about these?

> 
> I guess virtio-gpu must be fixed to use drm_plane_state->fence
> correctly ...

I was thinking about this too, but the current functions (e.g.
virtio_gpu_cmd_transfer_to_host_2d) take "struct virtio_gpu_fence".
I am not sure what the best way is to connect drm_plane_state->fence to
virtio_gpu_fence without changing major function interfaces.
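For context, that helper is declared in virtgpu_drv.h with a signature along
these lines (reproduced from memory, so treat the exact parameter list as an
approximation):

/* Takes the driver-specific virtio_gpu_fence rather than a plain
 * struct dma_fence, which is why drm_plane_state->fence cannot be
 * passed through directly. */
void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
					uint64_t offset,
					uint32_t width, uint32_t height,
					uint32_t x, uint32_t y,
					struct virtio_gpu_object_array *objs,
					struct virtio_gpu_fence *fence);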

> 
> take care,
>   Gerd
> 


Re: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update

2022-06-08 Thread Gerd Hoffmann
On Fri, Jun 03, 2022 at 02:18:49PM -0700, Dongwon Kim wrote:
> Having one fence for a vgfb would cause a conflict in case there are
> multiple planes referencing the same vgfb (e.g. an Xorg screen covering
> two displays in extended mode) being flushed simultaneously. So it makes
> sense to use a separate fence for each plane update to prevent this.
> 
> With the suggested code change, vgfb->fence is no longer required, so
> both prepare_fb and cleanup_fb are removed since they only handled fence
> creation/freeing.

The fences are allocated and released in prepare_fb + cleanup_fb for a
reason: atomic_update must not fail.

I guess virtio-gpu must be fixed to use drm_plane_state->fence
correctly ...

take care,
  Gerd



RE: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update

2022-06-06 Thread Kasireddy, Vivek
Hi DW,

> Subject: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update
> 
> Having one fence for a vgfb would cause a conflict in case there are
> multiple planes referencing the same vgfb (e.g. an Xorg screen covering
> two displays in extended mode) being flushed simultaneously. So it makes
> sense to use a separate fence for each plane update to prevent this.
> 
> With the suggested code change, vgfb->fence is no longer required, so
> both prepare_fb and cleanup_fb are removed since they only handled fence
> creation/freeing.
> 
> v2: - always use the fence as long as guest_blob is enabled on the
>   scanout object
> - obj and fence initialized as NULL ptrs to avoid an uninitialized
>   ptr problem (Reported by Dan Carpenter/kernel-test-robot)
> 
> Reported-by: kernel test robot 
> Reported-by: Dan Carpenter 
> Cc: Gurchetan Singh 
> Cc: Gerd Hoffmann 
> Cc: Vivek Kasireddy 
> Signed-off-by: Dongwon Kim 
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.h   |   1 -
>  drivers/gpu/drm/virtio/virtgpu_plane.c | 103 ++---
>  2 files changed, 39 insertions(+), 65 deletions(-)
> 
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 0a194aaad419..4c59c1e67ca5 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -186,7 +186,6 @@ struct virtio_gpu_output {
> 
>  struct virtio_gpu_framebuffer {
>   struct drm_framebuffer base;
> - struct virtio_gpu_fence *fence;
>  };
>  #define to_virtio_gpu_framebuffer(x) \
>   container_of(x, struct virtio_gpu_framebuffer, base)
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
> index 6d3cc9e238a4..821023b7d57d 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_plane.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
> @@ -137,29 +137,37 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
>   struct virtio_gpu_device *vgdev = dev->dev_private;
>   struct virtio_gpu_framebuffer *vgfb;
>   struct virtio_gpu_object *bo;
> + struct virtio_gpu_object_array *objs = NULL;
> + struct virtio_gpu_fence *fence = NULL;
> 
>   vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
>   bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
> - if (vgfb->fence) {
> - struct virtio_gpu_object_array *objs;
> 
> + if (!bo)
> + return;
[Kasireddy, Vivek] I think you can drop the above check, as bo is guaranteed
to be valid in resource_flush because the necessary checks are already done
earlier in virtio_gpu_primary_plane_update().

> +
> + if (bo->dumb && bo->guest_blob)
> + fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
> +0);
> +
> + if (fence) {
>   objs = virtio_gpu_array_alloc(1);
> - if (!objs)
> + if (!objs) {
> + kfree(fence);
>   return;
> + }
>   virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
>   virtio_gpu_array_lock_resv(objs);
> - virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
> -   width, height, objs, vgfb->fence);
> - virtio_gpu_notify(vgdev);
> + }
> +
> + virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
> +   width, height, objs, fence);
> + virtio_gpu_notify(vgdev);
[Kasireddy, Vivek] I think it is OK to retain the existing style where all the
statements relevant for if (fence) would be lumped together. I do understand
that the above two statements would be redundant in that case, but it looks a
bit cleaner.
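Something along these lines, just a sketch pieced together from the diff above
with the fence-related statements kept in one branch:

	if (fence) {
		objs = virtio_gpu_array_alloc(1);
		if (!objs) {
			kfree(fence);
			return;
		}
		virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
		virtio_gpu_array_lock_resv(objs);
		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
					      width, height, objs, fence);
		virtio_gpu_notify(vgdev);
		dma_fence_wait_timeout(&fence->f, true, msecs_to_jiffies(50));
		dma_fence_put(&fence->f);
	} else {
		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
					      width, height, NULL, NULL);
		virtio_gpu_notify(vgdev);
	}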

> 
> - dma_fence_wait_timeout(&vgfb->fence->f, true,
> + if (fence) {
> + dma_fence_wait_timeout(&fence->f, true,
>  msecs_to_jiffies(50));
> - dma_fence_put(&vgfb->fence->f);
> - vgfb->fence = NULL;
> - } else {
> - virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
> -   width, height, NULL, NULL);
> - virtio_gpu_notify(vgdev);
> + dma_fence_put(&fence->f);
>   }
>  }
> 
> @@ -239,47 +247,6 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
> rect.y2 - rect.y1);
>  }
> 
> -static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
> -struct drm_plane_state *new_state)
> -{
> - struct drm_device *dev = plane->dev;
> - struct virtio_gpu_device *vgdev = dev->dev_private;
> - struct virtio_gpu_framebuffer *vgfb;
> - struct virtio_gpu_object *bo;
> -
> - if (!new_state->fb)
> - return 0;
> -
> - vgfb = to_virtio_gpu_framebuffer(new_state->fb);
> - bo = gem_to_virtio_gpu_obj(vgfb->base.obj[