On Mon, Dec 19, 2016 at 10:40:41AM +0900, Inki Dae wrote:
> 
> 
> On 2016-08-16 01:02, Daniel Vetter wrote:
> > On Mon, Aug 15, 2016 at 04:42:18PM +0100, Chris Wilson wrote:
> >> Rendering operations to the dma-buf are tracked implicitly via the
> >> reservation_object (dmabuf->resv). This is used to allow poll() to
> >> wait upon outstanding rendering (or just query the current status of
> >> rendering). The dma-buf sync ioctl allows userspace to prepare the
> >> dma-buf for CPU access, which should include waiting upon rendering.
> >> (Some drivers may need to do more work to ensure that the dma-buf mmap
> >> is coherent as well as complete.)
> >>
> >> v2: Always wait upon the reservation object implicitly. We choose to do
> >> it after the native handler in case it can do so more efficiently.
> >>
> >> Testcase: igt/prime_vgem
> >> Testcase: igt/gem_concurrent_blit # *vgem*
> >> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> >> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> >> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> >> Cc: Eric Anholt <eric@anholt.net>
> >> Cc: linux-media@vger.kernel.org
> >> Cc: dri-devel@lists.freedesktop.org
> >> Cc: linaro-mm-sig@lists.linaro.org
> >> Cc: linux-kernel@vger.kernel.org
> >> ---
> >>  drivers/dma-buf/dma-buf.c | 23 +++++++++++++++++++++++
> >>  1 file changed, 23 insertions(+)
> >>
> >> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> >> index ddaee60ae52a..cf04d249a6a4 100644
> >> --- a/drivers/dma-buf/dma-buf.c
> >> +++ b/drivers/dma-buf/dma-buf.c
> >> @@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
> >>  }
> >>  EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
> >>  
> >> +static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >> +                                enum dma_data_direction direction)
> >> +{
> >> +  bool write = (direction == DMA_BIDIRECTIONAL ||
> >> +                direction == DMA_TO_DEVICE);
> >> +  struct reservation_object *resv = dmabuf->resv;
> >> +  long ret;
> >> +
> >> +  /* Wait on any implicit rendering fences */
> >> +  ret = reservation_object_wait_timeout_rcu(resv, write, true,
> >> +                                            MAX_SCHEDULE_TIMEOUT);
> >> +  if (ret < 0)
> >> +          return ret;
> >> +
> >> +  return 0;
> >> +}
> >>  
> >>  /**
> >>   * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
> >> @@ -608,6 +624,13 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >>    if (dmabuf->ops->begin_cpu_access)
> >>            ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
> >>  
> >> +  /* Ensure that all fences are waited upon - but we first allow
> >> +   * the native handler the chance to do so more efficiently if it
> >> +   * chooses. A double invocation here will be a reasonably cheap no-op.
> >> +   */
> >> +  if (ret == 0)
> >> +          ret = __dma_buf_begin_cpu_access(dmabuf, direction);
> > 
> > Not sure whether we should wait first and then flush, or the other way
> > round. But I don't think it'll matter for any current dma-buf exporter,
> > so meh.
> > 
> 
> Sorry for the late comment. I wonder whether there is a problem if the GPU
> or another DMA device tries to access this dma buffer after the
> dma_buf_begin_cpu_access call.
> I think in that case they - the GPU or DMA devices - would make a mess of
> the dma buffer while the CPU is accessing it.
> 
> This patch is in mainline already, so if this is a real problem then I
> think we should choose one of:
> 1. revert this patch from mainline

That scenario exists irrespective of this patch. It just depends on there
being concurrent CPU access alongside destructive DMA access (or vice-versa).

> 2. make sure that other DMA devices are prevented from accessing the buffer
> while the CPU is accessing it.

That is the safeguard you want, and the one employed elsewhere. You could
accomplish it by adding a fence to the reservation object for the CPU access
in begin_access and signaling it from end_access. It would need to be an
autosignaled fence because userspace may forget to end its access
(or may otherwise be terminated whilst holding the fence).
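
To sketch what I mean (purely illustrative and untested; the cpu_guard_*
names are made up, and this uses the v4.8-era struct fence API from before
the dma_fence rename): begin_access allocates a fence, arms a timer as the
autosignal fallback, and publishes it as the exclusive fence on dmabuf->resv;
end_access signals it and drops the reference.

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/fence.h>
#include <linux/reservation.h>
#include <linux/slab.h>
#include <linux/timer.h>

struct cpu_guard_fence {
	struct fence base;		/* must be first: default release kfrees it */
	spinlock_t lock;
	struct timer_list timer;	/* autosignal fallback */
};

static const char *cpu_guard_get_driver_name(struct fence *fence)
{
	return "cpu-guard";
}

static const char *cpu_guard_get_timeline_name(struct fence *fence)
{
	return "unbound";
}

static bool cpu_guard_enable_signaling(struct fence *fence)
{
	return true;	/* end_access or the timer will signal us */
}

static const struct fence_ops cpu_guard_ops = {
	.get_driver_name = cpu_guard_get_driver_name,
	.get_timeline_name = cpu_guard_get_timeline_name,
	.enable_signaling = cpu_guard_enable_signaling,
	.wait = fence_default_wait,
};

static void cpu_guard_timeout(unsigned long data)
{
	/* Userspace forgot end_access (or was terminated whilst holding
	 * the fence): signal anyway so waiters can make progress.
	 */
	fence_signal((struct fence *)data);
}

/* begin_access: publish a guard fence on the dma-buf */
static struct fence *cpu_guard_install(struct dma_buf *dmabuf)
{
	struct cpu_guard_fence *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return ERR_PTR(-ENOMEM);

	spin_lock_init(&f->lock);
	fence_init(&f->base, &cpu_guard_ops, &f->lock,
		   fence_context_alloc(1), 1);
	setup_timer(&f->timer, cpu_guard_timeout, (unsigned long)&f->base);
	mod_timer(&f->timer, jiffies + 10 * HZ);	/* autosignal after 10s */

	ww_mutex_lock(&dmabuf->resv->lock, NULL);
	reservation_object_add_excl_fence(dmabuf->resv, &f->base);
	ww_mutex_unlock(&dmabuf->resv->lock);

	return &f->base;
}

/* end_access: signal and drop the reference returned above. If userspace
 * never gets here, only the timer signals; the driver must still drop its
 * reference on e.g. file release.
 */
static void cpu_guard_finish(struct fence *fence)
{
	struct cpu_guard_fence *f =
		container_of(fence, struct cpu_guard_fence, base);

	del_timer_sync(&f->timer);
	fence_signal(fence);
	fence_put(fence);
}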

Everyone using the mmap without begin/end can of course still wreak havoc
on the buffer.
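
(As an aside, that begin/end bracketing is what DMA_BUF_IOCTL_SYNC exposes
to userspace; a minimal, illustrative use, assuming fd is the dma-buf file
descriptor and ptr its mmap:)

#include <linux/dma-buf.h>
#include <string.h>
#include <sys/ioctl.h>

/* Bracket CPU access to a mapped dma-buf: SYNC_START waits for any
 * implicit fences (after this patch) before the CPU touches the pages,
 * SYNC_END flushes the CPU writes back for the devices.
 */
static int write_via_mmap(int fd, void *ptr, size_t len)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
	};

	if (ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync))
		return -1;

	memset(ptr, 0xff, len);	/* CPU access through the mmap */

	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
	return ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
}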
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
