Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-07 Thread Chris Wilson
Quoting lepton (2020-07-07 18:05:21)
> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  wrote:
> >
> > If we assign obj->filp, we believe that the create vgem bo is native and
> > allow direct operations like mmap() assuming it behaves as backed by a
> > shmemfs inode. When imported from a dmabuf, the obj->pages are
> > not always meaningful and the shmemfs backing store misleading.
> >
> > Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> > and that rejects attempts to mmap an imported dmabuf,
> What do you mean by "regular mmap access" here?  It looks like vgem is
> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
> drm_gem_dumb_map_offset

As I too found out, and so had to correct my storytelling.

By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
API] as opposed to mmap() via an exported dma-buf fd. I had to look at
igt to see how it was being used.
-Chris


Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-07 Thread Chris Wilson
Quoting lepton (2020-07-07 19:17:51)
> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson  wrote:
> >
> > Quoting lepton (2020-07-07 18:05:21)
> > > On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  
> > > wrote:
> > > >
> > > > If we assign obj->filp, we believe that the create vgem bo is native and
> > > > allow direct operations like mmap() assuming it behaves as backed by a
> > > > shmemfs inode. When imported from a dmabuf, the obj->pages are
> > > > not always meaningful and the shmemfs backing store misleading.
> > > >
> > > > Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> > > > and that rejects attempts to mmap an imported dmabuf,
> > > What do you mean by "regular mmap access" here?  It looks like vgem is
> > > using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
> > > drm_gem_dumb_map_offset
> >
> > As I too found out, and so had to correct my story telling.
> >
> > By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
> > API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> > igt to see how it was being used.
> Now it seems your fix is to disable "regular mmap" on imported dma buf
> for vgem. I am not really a graphic guy, but then the api looks like:
> for a gem handle, user space has to guess to find out the way to mmap
> it. If user space guess wrong, then it will fail to mmap. Is this the
> expected way
> for people to handle gpu buffer?

You either have a dumb buffer handle, or a dma-buf fd. If you have the
handle, you have to use the dumb buffer API, there's no other way to
mmap it. If you have the dma-buf fd, you should mmap it directly. Those
two are clear.

It's when you import the dma-buf into vgem and create a handle out of
it, that's when the handle is no longer first class and certain uAPI
[the dumb buffer API in particular] fail.

It's not brilliant, as you say: it requires the user to remember the
difference between the handles, but at the same time it does prevent
them from falling into coherency traps by forcing them to use the right
driver to handle the object, and to consider the additional ioctls
that go along with that access.
-Chris
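
To make the two first-class paths concrete, here is a minimal userspace
sketch (function names are illustrative, error handling is omitted, and it
assumes the kernel/libdrm uapi headers are available):

#include <stdint.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/drm.h>
#include <drm/drm_mode.h>

/* Dumb-buffer handle: ask the driver for its fake mmap offset, then
 * mmap() the DRM fd at that offset. */
static void *map_dumb_bo(int drm_fd, uint32_t handle, size_t size)
{
	struct drm_mode_map_dumb map = { .handle = handle };

	if (ioctl(drm_fd, DRM_IOCTL_MODE_MAP_DUMB, &map))
		return MAP_FAILED;
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    drm_fd, map.offset);
}

/* dma-buf fd: mmap() the fd itself, always at offset 0. */
static void *map_dmabuf_fd(int dmabuf_fd, size_t size)
{
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    dmabuf_fd, 0);
}

A vgem handle created by importing a dma-buf fits neither first-class case,
which is why the dumb-map ioctl refuses it after this patch.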


Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread lepton
On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  wrote:
>
> If we assign obj->filp, we believe that the created vgem bo is native and
> allow direct operations like mmap() assuming it behaves as backed by a
> shmemfs inode. When imported from a dmabuf, the obj->pages are
> not always meaningful and the shmemfs backing store misleading.
>
> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> and that rejects attempts to mmap an imported dmabuf,
>
> drm_gem_dumb_map_offset():
> if (obj->import_attach) return -EINVAL;
>
> So the only route by which we might accidentally allow mmapping of an
> imported buffer is via vgem_prime_mmap(), which checked for
> obj->filp assuming that it would be NULL.
>
> Well it would have, had it been updated to use the common
> drm_gem_dumb_map_offset() helper; instead it has
>
> vgem_gem_dumb_map():
> if (!obj->filp) return -EINVAL;
>
> falling foul of the same trap as above.
>
> Reported-by: Lepton Wu 
> Fixes: af33a9190d02 ("drm/vgem: Enable dmabuf import interfaces")
> Signed-off-by: Chris Wilson 
> Cc: Lepton Wu 
> Cc: Daniel Vetter 
> Cc: Christian König 
> Cc: Thomas Hellström (Intel) 
> Cc:  # v4.13+
> ---
>  drivers/gpu/drm/vgem/vgem_drv.c | 27 +--
>  1 file changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index 909eba43664a..eb3b7cdac941 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -91,7 +91,7 @@ static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
> ret = 0;
> }
> mutex_unlock(&obj->pages_lock);
> -   if (ret) {
> +   if (ret && obj->base.filp) {
> struct page *page;
>
> page = shmem_read_mapping_page(
> @@ -157,7 +157,8 @@ static void vgem_postclose(struct drm_device *dev, struct drm_file *file)
>  }
>
>  static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
> -   unsigned long size)
> +struct file *shmem,
> +unsigned long size)
>  {
> struct drm_vgem_gem_object *obj;
> int ret;
Remove this, it's not used any more.
> @@ -166,11 +167,8 @@ static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
> if (!obj)
> return ERR_PTR(-ENOMEM);
>
> -   ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
> -   if (ret) {
> -   kfree(obj);
> -   return ERR_PTR(ret);
> -   }
> +   drm_gem_private_object_init(dev, &obj->base, size);
> +   obj->base.filp = shmem;
>
> mutex_init(&obj->pages_lock);
>
> @@ -189,11 +187,20 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
>   unsigned long size)
>  {
> struct drm_vgem_gem_object *obj;
> +   struct file *shmem;
> int ret;
>
> -   obj = __vgem_gem_create(dev, size);
> -   if (IS_ERR(obj))
> +   size = roundup(size, PAGE_SIZE);
> +
> +   shmem = shmem_file_setup(DRIVER_NAME, size, VM_NORESERVE);
> +   if (IS_ERR(shmem))
> +   return ERR_CAST(shmem);
> +
> +   obj = __vgem_gem_create(dev, shmem, size);
> +   if (IS_ERR(obj)) {
> +   fput(shmem);
> return ERR_CAST(obj);
> +   }
>
> ret = drm_gem_handle_create(file, &obj->base, handle);
> if (ret) {
> @@ -363,7 +370,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> struct drm_vgem_gem_object *obj;
> int npages;
>
> -   obj = __vgem_gem_create(dev, attach->dmabuf->size);
> +   obj = __vgem_gem_create(dev, NULL, attach->dmabuf->size);
> if (IS_ERR(obj))
> return ERR_CAST(obj);
>
> --
> 2.27.0
>
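
To summarise the effect of the diff: only the native creation path now
attaches a shmemfs file, so a dumb-map implementation along the following
lines (a sketch modelled on vgem_gem_dumb_map(), not a verbatim copy of it)
naturally rejects imported objects, mirroring the obj->import_attach check
in drm_gem_dumb_map_offset():

#include <drm/drm_device.h>
#include <drm/drm_file.h>
#include <drm/drm_gem.h>

static int dumb_map_sketch(struct drm_file *file, struct drm_device *dev,
			   u32 handle, u64 *offset)
{
	struct drm_gem_object *obj;
	int ret = 0;

	obj = drm_gem_object_lookup(file, handle);
	if (!obj)
		return -ENOENT;

	if (!obj->filp) {
		/* Imported dma-buf: no shmemfs backing after this patch. */
		ret = -EINVAL;
		goto out;
	}

	ret = drm_gem_create_mmap_offset(obj);
	if (!ret)
		*offset = drm_vma_node_offset_addr(&obj->vma_node);
out:
	drm_gem_object_put(obj);
	return ret;
}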


Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread lepton
On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson  wrote:
>
> Quoting lepton (2020-07-07 18:05:21)
> > On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  
> > wrote:
> > >
> > > If we assign obj->filp, we believe that the create vgem bo is native and
> > > allow direct operations like mmap() assuming it behaves as backed by a
> > > shmemfs inode. When imported from a dmabuf, the obj->pages are
> > > not always meaningful and the shmemfs backing store misleading.
> > >
> > > Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> > > and that rejects attempts to mmap an imported dmabuf,
> > What do you mean by "regular mmap access" here?  It looks like vgem is
> > using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
> > drm_gem_dumb_map_offset
>
> As I too found out, and so had to correct my story telling.
>
> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> igt to see how it was being used.
Now it seems your fix is to disable "regular mmap" on imported dma-bufs
for vgem. I am not really a graphics guy, but then the API looks like this:
for a GEM handle, user space has to guess which way to mmap it. If user
space guesses wrong, the mmap will fail. Is this the expected way for
people to handle GPU buffers?
> -Chris


Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread lepton
On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  wrote:
>
> If we assign obj->filp, we believe that the created vgem bo is native and
> allow direct operations like mmap() assuming it behaves as backed by a
> shmemfs inode. When imported from a dmabuf, the obj->pages are
> not always meaningful and the shmemfs backing store misleading.
>
> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> and that rejects attempts to mmap an imported dmabuf,
What do you mean by "regular mmap access" here? It looks like vgem is
using vgem_gem_dumb_map as the .dumb_map_offset callback, so it doesn't call
drm_gem_dumb_map_offset.
>
> drm_gem_dumb_map_offset():
> if (obj->import_attach) return -EINVAL;
>
> So the only route by which we might accidentally allow mmapping of an
> imported buffer is via vgem_prime_mmap(), which checked for
> obj->filp assuming that it would be NULL.
>
> Well it would have, had it been updated to use the common
> drm_gem_dumb_map_offset() helper; instead it has
>
> vgem_gem_dumb_map():
> if (!obj->filp) return -EINVAL;
>
> falling foul of the same trap as above.
>
> Reported-by: Lepton Wu 
> Fixes: af33a9190d02 ("drm/vgem: Enable dmabuf import interfaces")
> Signed-off-by: Chris Wilson 
> Cc: Lepton Wu 
> Cc: Daniel Vetter 
> Cc: Christian König 
> Cc: Thomas Hellström (Intel) 
> Cc:  # v4.13+
> ---
>  drivers/gpu/drm/vgem/vgem_drv.c | 27 +--
>  1 file changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index 909eba43664a..eb3b7cdac941 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -91,7 +91,7 @@ static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
> ret = 0;
> }
> mutex_unlock(&obj->pages_lock);
> -   if (ret) {
> +   if (ret && obj->base.filp) {
> struct page *page;
>
> page = shmem_read_mapping_page(
> @@ -157,7 +157,8 @@ static void vgem_postclose(struct drm_device *dev, struct drm_file *file)
>  }
>
>  static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
> -   unsigned long size)
> +struct file *shmem,
> +unsigned long size)
>  {
> struct drm_vgem_gem_object *obj;
> int ret;
> @@ -166,11 +167,8 @@ static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
> if (!obj)
> return ERR_PTR(-ENOMEM);
>
> -   ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
> -   if (ret) {
> -   kfree(obj);
> -   return ERR_PTR(ret);
> -   }
> +   drm_gem_private_object_init(dev, &obj->base, size);
> +   obj->base.filp = shmem;
>
> mutex_init(&obj->pages_lock);
>
> @@ -189,11 +187,20 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
>   unsigned long size)
>  {
> struct drm_vgem_gem_object *obj;
> +   struct file *shmem;
> int ret;
>
> -   obj = __vgem_gem_create(dev, size);
> -   if (IS_ERR(obj))
> +   size = roundup(size, PAGE_SIZE);
> +
> +   shmem = shmem_file_setup(DRIVER_NAME, size, VM_NORESERVE);
> +   if (IS_ERR(shmem))
> +   return ERR_CAST(shmem);
> +
> +   obj = __vgem_gem_create(dev, shmem, size);
> +   if (IS_ERR(obj)) {
> +   fput(shmem);
> return ERR_CAST(obj);
> +   }
>
> ret = drm_gem_handle_create(file, &obj->base, handle);
> if (ret) {
> @@ -363,7 +370,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> struct drm_vgem_gem_object *obj;
> int npages;
>
> -   obj = __vgem_gem_create(dev, attach->dmabuf->size);
> +   obj = __vgem_gem_create(dev, NULL, attach->dmabuf->size);
> if (IS_ERR(obj))
> return ERR_CAST(obj);
>
> --
> 2.27.0
>


Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread Christian König

Am 07.07.20 um 20:35 schrieb Chris Wilson:

Quoting lepton (2020-07-07 19:17:51)

On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson  wrote:

Quoting lepton (2020-07-07 18:05:21)

On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  wrote:

If we assign obj->filp, we believe that the create vgem bo is native and
allow direct operations like mmap() assuming it behaves as backed by a
shmemfs inode. When imported from a dmabuf, the obj->pages are
not always meaningful and the shmemfs backing store misleading.

Note, that regular mmap access to a vgem bo is via the dumb buffer API,
and that rejects attempts to mmap an imported dmabuf,

What do you mean by "regular mmap access" here?  It looks like vgem is
using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
drm_gem_dumb_map_offset

As I too found out, and so had to correct my story telling.

By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
API] as opposed to mmap() via an exported dma-buf fd. I had to look at
igt to see how it was being used.

Now it seems your fix is to disable "regular mmap" on imported dma buf
for vgem. I am not really a graphic guy, but then the api looks like:
for a gem handle, user space has to guess to find out the way to mmap
it. If user space guess wrong, then it will fail to mmap. Is this the
expected way
for people to handle gpu buffer?

You either have a dumb buffer handle, or a dma-buf fd. If you have the
handle, you have to use the dumb buffer API, there's no other way to
mmap it. If you have the dma-buf fd, you should mmap it directly. Those
two are clear.

It's when you import the dma-buf into vgem and create a handle out of
it, that's when the handle is no longer first class and certain uAPI
[the dumb buffer API in particular] fail.

It's not brilliant, as you say, it requires the user to remember the
difference between the handles, but at the same time it does prevent
them falling into coherency traps by forcing them to use the right
driver to handle the object, and have to consider the additional ioctls
that go along with that access.


Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an 
importer is illegal.


What we could maybe try to do is to redirect this mmap() API call on the 
importer to the exporter, but I'm pretty sure that the fs layer wouldn't 
like that without changes.


Regards,
Christian.



-Chris




Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread Daniel Vetter
On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
> Am 07.07.20 um 20:35 schrieb Chris Wilson:
> > Quoting lepton (2020-07-07 19:17:51)
> > > On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson  
> > > wrote:
> > > > Quoting lepton (2020-07-07 18:05:21)
> > > > > On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson 
> > > > >  wrote:
> > > > > > If we assign obj->filp, we believe that the create vgem bo is 
> > > > > > native and
> > > > > > allow direct operations like mmap() assuming it behaves as backed 
> > > > > > by a
> > > > > > shmemfs inode. When imported from a dmabuf, the obj->pages are
> > > > > > not always meaningful and the shmemfs backing store misleading.
> > > > > > 
> > > > > > Note, that regular mmap access to a vgem bo is via the dumb buffer 
> > > > > > API,
> > > > > > and that rejects attempts to mmap an imported dmabuf,
> > > > > What do you mean by "regular mmap access" here?  It looks like vgem is
> > > > > using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't 
> > > > > call
> > > > > drm_gem_dumb_map_offset
> > > > As I too found out, and so had to correct my story telling.
> > > > 
> > > > By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
> > > > API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> > > > igt to see how it was being used.
> > > Now it seems your fix is to disable "regular mmap" on imported dma buf
> > > for vgem. I am not really a graphic guy, but then the api looks like:
> > > for a gem handle, user space has to guess to find out the way to mmap
> > > it. If user space guess wrong, then it will fail to mmap. Is this the
> > > expected way
> > > for people to handle gpu buffer?
> > You either have a dumb buffer handle, or a dma-buf fd. If you have the
> > handle, you have to use the dumb buffer API, there's no other way to
> > mmap it. If you have the dma-buf fd, you should mmap it directly. Those
> > two are clear.
> > 
> > It's when you import the dma-buf into vgem and create a handle out of
> > it, that's when the handle is no longer first class and certain uAPI
> > [the dumb buffer API in particular] fail.
> > 
> > It's not brilliant, as you say, it requires the user to remember the
> > difference between the handles, but at the same time it does prevent
> > them falling into coherency traps by forcing them to use the right
> > driver to handle the object, and have to consider the additional ioctls
> > that go along with that access.
> 
> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
> is illegal.
> 
> What we could maybe try to do is to redirect this mmap() API call on the
> importer to the exporter, but I'm pretty sure that the fs layer wouldn't
> like that without changes.

We already do that; there's a full helper-ified path from, I think, the
shmem helpers through the prime helpers to forward all of this, including
handling buffer offsets and all the other lolz back&forth.
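
A much simplified sketch of that forwarding (the shape follows the GEM shmem
helpers, but the function name is illustrative and this is not a verbatim
copy of them):

#include <linux/dma-buf.h>
#include <linux/mm.h>
#include <drm/drm_gem.h>

static int importer_mmap_sketch(struct drm_gem_object *obj,
				struct vm_area_struct *vma)
{
	if (obj->import_attach) {
		/* The dma-buf is mapped at offset 0, so drop the GEM fake
		 * offset... */
		vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
		/* ...and hand the whole mapping over to the exporter. */
		return dma_buf_mmap(obj->import_attach->dmabuf, vma, 0);
	}

	/* Native object: the driver installs its own vm_ops/fault path. */
	return -ENOSYS;
}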

Of course there's still the problem that many drivers don't forward the
cache coherency calls for begin/end cpu access, so in a bunch of cases
you'll get cacheline dirt soup. But that's kinda standard procedure for
dma-buf :-P
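
For completeness, the begin/end CPU-access bracketing in question is what
userspace is meant to issue around CPU access to an mmap()ed dma-buf; a
minimal sketch against the dma-buf uapi (helper names are illustrative):

#include <sys/ioctl.h>
#include <linux/dma-buf.h>

static int dmabuf_cpu_begin(int dmabuf_fd)
{
	struct dma_buf_sync sync = { .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW };

	return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}

static int dmabuf_cpu_end(int dmabuf_fd)
{
	struct dma_buf_sync sync = { .flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW };

	return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}

Whether these actually reach the exporter's begin_cpu_access/end_cpu_access
hooks through an importer's mmap path is exactly the gap described above.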

But yeah, trying to handle the mmap as an importer, bypassing the exporter:
nope. The one exception is if you have some kind of fancy gart with
cpu-visible pci bar (like at least integrated intel gpus have). But in
that case the mmap very much looks&acts like device access in every way.

Cheers, Daniel

> Regards,
> Christian.
> 
> 
> > -Chris
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread Christian König

Am 08.07.20 um 11:54 schrieb Daniel Vetter:

On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:

Am 07.07.20 um 20:35 schrieb Chris Wilson:

Quoting lepton (2020-07-07 19:17:51)

On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson  wrote:

Quoting lepton (2020-07-07 18:05:21)

On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  wrote:

If we assign obj->filp, we believe that the create vgem bo is native and
allow direct operations like mmap() assuming it behaves as backed by a
shmemfs inode. When imported from a dmabuf, the obj->pages are
not always meaningful and the shmemfs backing store misleading.

Note, that regular mmap access to a vgem bo is via the dumb buffer API,
and that rejects attempts to mmap an imported dmabuf,

What do you mean by "regular mmap access" here?  It looks like vgem is
using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
drm_gem_dumb_map_offset

As I too found out, and so had to correct my story telling.

By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
API] as opposed to mmap() via an exported dma-buf fd. I had to look at
igt to see how it was being used.

Now it seems your fix is to disable "regular mmap" on imported dma buf
for vgem. I am not really a graphic guy, but then the api looks like:
for a gem handle, user space has to guess to find out the way to mmap
it. If user space guess wrong, then it will fail to mmap. Is this the
expected way
for people to handle gpu buffer?

You either have a dumb buffer handle, or a dma-buf fd. If you have the
handle, you have to use the dumb buffer API, there's no other way to
mmap it. If you have the dma-buf fd, you should mmap it directly. Those
two are clear.

It's when you import the dma-buf into vgem and create a handle out of
it, that's when the handle is no longer first class and certain uAPI
[the dumb buffer API in particular] fail.

It's not brilliant, as you say, it requires the user to remember the
difference between the handles, but at the same time it does prevent
them falling into coherency traps by forcing them to use the right
driver to handle the object, and have to consider the additional ioctls
that go along with that access.

Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
is illegal.

What we could maybe try to do is to redirect this mmap() API call on the
importer to the exporter, but I'm pretty sure that the fs layer wouldn't
like that without changes.

We already do that, there's a full helper-ified path from I think shmem
helpers through prime helpers to forward this all. Including handling
buffer offsets and all the other lolz back&forth.


Oh, that most likely won't work correctly with unpinned DMA-bufs and 
needs to be avoided.


Each file descriptor is associated with a struct address_space. And 
when you mmap() through the importer by redirecting the system call to 
the exporter, you end up with the wrong struct address_space in your VMA.


That in turn can easily go up in flames when the exporter tries to 
invalidate the CPU mappings for its DMA-buf while moving it.
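
A sketch of the failure mode (drm_vma_node_unmap() and unmap_mapping_range()
are the real helpers; the wrapper function is illustrative): when moving a
buffer, the exporter zaps the CPU PTEs of all mappings of its own file via
the address_space it knows about, so a VMA instantiated against the
importer's file is simply never found:

#include <linux/fs.h>
#include <drm/drm_gem.h>
#include <drm/drm_vma_manager.h>

static void exporter_invalidate_sketch(struct drm_gem_object *obj,
				       struct address_space *exporter_mapping)
{
	/* Wraps unmap_mapping_range() over the node's offset/size range. */
	drm_vma_node_unmap(&obj->vma_node, exporter_mapping);
}

Any mapping created with a different vm_file keeps its stale PTEs, and the
CPU keeps scribbling over memory the exporter has already moved.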


Where are we doing this? My last status was that this is forbidden.

Christian.


Of course there's still the problem that many drivers don't forward the
cache coherency calls for begin/end cpu access, so in a bunch of cases
you'll cache cacheline dirt soup. But that's kinda standard procedure for
dma-buf :-P

But yeah trying to handle the mmap as an importer, bypassing the export:
nope. The one exception is if you have some kind of fancy gart with
cpu-visible pci bar (like at least integrated intel gpus have). But in
that case the mmap very much looks&acts like device access in every way.

Cheers, Daniel


Regards,
Christian.



-Chris




Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread Daniel Vetter
On Wed, Jul 8, 2020 at 4:37 PM Christian König  wrote:
>
> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
> > On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
> >> Am 07.07.20 um 20:35 schrieb Chris Wilson:
> >>> Quoting lepton (2020-07-07 19:17:51)
>  On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson  
>  wrote:
> > Quoting lepton (2020-07-07 18:05:21)
> >> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  
> >> wrote:
> >>> If we assign obj->filp, we believe that the create vgem bo is native 
> >>> and
> >>> allow direct operations like mmap() assuming it behaves as backed by a
> >>> shmemfs inode. When imported from a dmabuf, the obj->pages are
> >>> not always meaningful and the shmemfs backing store misleading.
> >>>
> >>> Note, that regular mmap access to a vgem bo is via the dumb buffer 
> >>> API,
> >>> and that rejects attempts to mmap an imported dmabuf,
> >> What do you mean by "regular mmap access" here?  It looks like vgem is
> >> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't 
> >> call
> >> drm_gem_dumb_map_offset
> > As I too found out, and so had to correct my story telling.
> >
> > By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
> > API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> > igt to see how it was being used.
>  Now it seems your fix is to disable "regular mmap" on imported dma buf
>  for vgem. I am not really a graphic guy, but then the api looks like:
>  for a gem handle, user space has to guess to find out the way to mmap
>  it. If user space guess wrong, then it will fail to mmap. Is this the
>  expected way
>  for people to handle gpu buffer?
> >>> You either have a dumb buffer handle, or a dma-buf fd. If you have the
> >>> handle, you have to use the dumb buffer API, there's no other way to
> >>> mmap it. If you have the dma-buf fd, you should mmap it directly. Those
> >>> two are clear.
> >>>
> >>> It's when you import the dma-buf into vgem and create a handle out of
> >>> it, that's when the handle is no longer first class and certain uAPI
> >>> [the dumb buffer API in particular] fail.
> >>>
> >>> It's not brilliant, as you say, it requires the user to remember the
> >>> difference between the handles, but at the same time it does prevent
> >>> them falling into coherency traps by forcing them to use the right
> >>> driver to handle the object, and have to consider the additional ioctls
> >>> that go along with that access.
> >> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
> >> is illegal.
> >>
> >> What we could maybe try to do is to redirect this mmap() API call on the
> >> importer to the exporter, but I'm pretty sure that the fs layer wouldn't
> >> like that without changes.
> > We already do that, there's a full helper-ified path from I think shmem
> > helpers through prime helpers to forward this all. Including handling
> > buffer offsets and all the other lolz back&forth.
>
> Oh, that most likely won't work correctly with unpinned DMA-bufs and
> needs to be avoided.
>
> Each file descriptor is associated with an struct address_space. And
> when you mmap() through the importer by redirecting the system call to
> the exporter you end up with the wrong struct address_space in your VMA.
>
> That in turn can go up easily in flames when the exporter tries to
> invalidate the CPU mappings for its DMA-buf while moving it.
>
> Where are we doing this? My last status was that this is forbidden.

Hm I thought we're doing all that already, but looking at the code
again we're only doing this when opening a new drm fd or dma-buf fd.
So the right file->f_mapping is always set at file creation time.

And we indeed don't frob this more when going another indirection ...
Maybe we screwed up something somewhere :-/

Also I thought the mapping is only taken after the vma is instantiated,
otherwise the tricks we're playing with dma-buf already wouldn't work:
dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
whether we could adjust vm_file too. Or is that the thing that's
forbidden?
-Daniel
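
The offset juggling Daniel refers to is the fake-offset translation on the
prime mmap path; a rough sketch modelled on drm_gem_prime_mmap() (not a
verbatim copy, and the function name is illustrative):

#include <linux/mm.h>
#include <drm/drm_gem.h>

static int prime_mmap_offset_sketch(struct drm_gem_object *obj,
				    struct vm_area_struct *vma)
{
	/* A dma-buf is mmap()ed at offset 0, while GEM mmap on the DRM fd
	 * uses the fake offset from the vma offset manager, so re-add it
	 * before calling into the driver's regular mmap path. */
	vma->vm_pgoff += drm_vma_node_start(&obj->vma_node);

	/* Swapping vma->vm_file here as well is the step nobody takes today;
	 * the real helper goes on to call the driver's fops->mmap(). */
	return 0;
}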

> Christian.
>
> > Of course there's still the problem that many drivers don't forward the
> > cache coherency calls for begin/end cpu access, so in a bunch of cases
> > you'll cache cacheline dirt soup. But that's kinda standard procedure for
> > dma-buf :-P
> >
> > But yeah trying to handle the mmap as an importer, bypassing the export:
> > nope. The one exception is if you have some kind of fancy gart with
> > cpu-visible pci bar (like at least integrated intel gpus have). But in
> > that case the mmap very much looks&acts like device access in every way.
> >
> > Cheers, Daniel
> >
> >> Regards,
> >> Christian.
> >>
> >>
> >>> -Chris
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread Christian König

Am 08.07.20 um 17:01 schrieb Daniel Vetter:

On Wed, Jul 8, 2020 at 4:37 PM Christian König  wrote:

Am 08.07.20 um 11:54 schrieb Daniel Vetter:

On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:

Am 07.07.20 um 20:35 schrieb Chris Wilson:

Quoting lepton (2020-07-07 19:17:51)

On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson  wrote:

Quoting lepton (2020-07-07 18:05:21)

On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  wrote:

If we assign obj->filp, we believe that the create vgem bo is native and
allow direct operations like mmap() assuming it behaves as backed by a
shmemfs inode. When imported from a dmabuf, the obj->pages are
not always meaningful and the shmemfs backing store misleading.

Note, that regular mmap access to a vgem bo is via the dumb buffer API,
and that rejects attempts to mmap an imported dmabuf,

What do you mean by "regular mmap access" here?  It looks like vgem is
using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
drm_gem_dumb_map_offset

As I too found out, and so had to correct my story telling.

By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
API] as opposed to mmap() via an exported dma-buf fd. I had to look at
igt to see how it was being used.

Now it seems your fix is to disable "regular mmap" on imported dma buf
for vgem. I am not really a graphic guy, but then the api looks like:
for a gem handle, user space has to guess to find out the way to mmap
it. If user space guess wrong, then it will fail to mmap. Is this the
expected way
for people to handle gpu buffer?

You either have a dumb buffer handle, or a dma-buf fd. If you have the
handle, you have to use the dumb buffer API, there's no other way to
mmap it. If you have the dma-buf fd, you should mmap it directly. Those
two are clear.

It's when you import the dma-buf into vgem and create a handle out of
it, that's when the handle is no longer first class and certain uAPI
[the dumb buffer API in particular] fail.

It's not brilliant, as you say, it requires the user to remember the
difference between the handles, but at the same time it does prevent
them falling into coherency traps by forcing them to use the right
driver to handle the object, and have to consider the additional ioctls
that go along with that access.

Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
is illegal.

What we could maybe try to do is to redirect this mmap() API call on the
importer to the exporter, but I'm pretty sure that the fs layer wouldn't
like that without changes.

We already do that, there's a full helper-ified path from I think shmem
helpers through prime helpers to forward this all. Including handling
buffer offsets and all the other lolz back&forth.

Oh, that most likely won't work correctly with unpinned DMA-bufs and
needs to be avoided.

Each file descriptor is associated with an struct address_space. And
when you mmap() through the importer by redirecting the system call to
the exporter you end up with the wrong struct address_space in your VMA.

That in turn can go up easily in flames when the exporter tries to
invalidate the CPU mappings for its DMA-buf while moving it.

Where are we doing this? My last status was that this is forbidden.

Hm I thought we're doing all that already, but looking at the code
again we're only doing this when opening a new drm fd or dma-buf fd.
So the right file->f_mapping is always set at file creation time.

And we indeed don't frob this more when going another indirection ...
Maybe we screwed up something somewhere :-/

Also I thought the mapping is only taken after the vma is instatiated,
otherwise the tricks we're playing with dma-buf already wouldn't work:
dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
whether we could adjust vm_file too. Or is that the thing that's
forbidden?


Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.

But changing vma->vm_file, that's something I haven't seen before and 
most likely could blow up badly.


Christian.


-Daniel


Christian.


Of course there's still the problem that many drivers don't forward the
cache coherency calls for begin/end cpu access, so in a bunch of cases
you'll cache cacheline dirt soup. But that's kinda standard procedure for
dma-buf :-P

But yeah trying to handle the mmap as an importer, bypassing the export:
nope. The one exception is if you have some kind of fancy gart with
cpu-visible pci bar (like at least integrated intel gpus have). But in
that case the mmap very much looks&acts like device access in every way.

Cheers, Daniel


Regards,
Christian.



-Chris






Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread Daniel Vetter
On Wed, Jul 8, 2020 at 5:05 PM Christian König  wrote:
>
> Am 08.07.20 um 17:01 schrieb Daniel Vetter:
> > On Wed, Jul 8, 2020 at 4:37 PM Christian König  
> > wrote:
> >> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
> >>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>  Am 07.07.20 um 20:35 schrieb Chris Wilson:
> > Quoting lepton (2020-07-07 19:17:51)
> >> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson 
> >>  wrote:
> >>> Quoting lepton (2020-07-07 18:05:21)
>  On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson 
>   wrote:
> > If we assign obj->filp, we believe that the create vgem bo is 
> > native and
> > allow direct operations like mmap() assuming it behaves as backed 
> > by a
> > shmemfs inode. When imported from a dmabuf, the obj->pages are
> > not always meaningful and the shmemfs backing store misleading.
> >
> > Note, that regular mmap access to a vgem bo is via the dumb buffer 
> > API,
> > and that rejects attempts to mmap an imported dmabuf,
>  What do you mean by "regular mmap access" here?  It looks like vgem 
>  is
>  using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't 
>  call
>  drm_gem_dumb_map_offset
> >>> As I too found out, and so had to correct my story telling.
> >>>
> >>> By regular mmap() access I mean mmap on the vgem bo [via the dumb 
> >>> buffer
> >>> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> >>> igt to see how it was being used.
> >> Now it seems your fix is to disable "regular mmap" on imported dma buf
> >> for vgem. I am not really a graphic guy, but then the api looks like:
> >> for a gem handle, user space has to guess to find out the way to mmap
> >> it. If user space guess wrong, then it will fail to mmap. Is this the
> >> expected way
> >> for people to handle gpu buffer?
> > You either have a dumb buffer handle, or a dma-buf fd. If you have the
> > handle, you have to use the dumb buffer API, there's no other way to
> > mmap it. If you have the dma-buf fd, you should mmap it directly. Those
> > two are clear.
> >
> > It's when you import the dma-buf into vgem and create a handle out of
> > it, that's when the handle is no longer first class and certain uAPI
> > [the dumb buffer API in particular] fail.
> >
> > It's not brilliant, as you say, it requires the user to remember the
> > difference between the handles, but at the same time it does prevent
> > them falling into coherency traps by forcing them to use the right
> > driver to handle the object, and have to consider the additional ioctls
> > that go along with that access.
>  Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an 
>  importer
>  is illegal.
> 
>  What we could maybe try to do is to redirect this mmap() API call on the
>  importer to the exporter, but I'm pretty sure that the fs layer wouldn't
>  like that without changes.
> >>> We already do that, there's a full helper-ified path from I think shmem
> >>> helpers through prime helpers to forward this all. Including handling
> >>> buffer offsets and all the other lolz back&forth.
> >> Oh, that most likely won't work correctly with unpinned DMA-bufs and
> >> needs to be avoided.
> >>
> >> Each file descriptor is associated with an struct address_space. And
> >> when you mmap() through the importer by redirecting the system call to
> >> the exporter you end up with the wrong struct address_space in your VMA.
> >>
> >> That in turn can go up easily in flames when the exporter tries to
> >> invalidate the CPU mappings for its DMA-buf while moving it.
> >>
> >> Where are we doing this? My last status was that this is forbidden.
> > Hm I thought we're doing all that already, but looking at the code
> > again we're only doing this when opening a new drm fd or dma-buf fd.
> > So the right file->f_mapping is always set at file creation time.
> >
> > And we indeed don't frob this more when going another indirection ...
> > Maybe we screwed up something somewhere :-/
> >
> > Also I thought the mapping is only taken after the vma is instatiated,
> > otherwise the tricks we're playing with dma-buf already wouldn't work:
> > dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
> > it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
> > whether we could adjust vm_file too. Or is that the thing that's
> > forbidden?
>
> Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
>
> But changing vma->vm_file, that's something I haven't seen before and
> most likely could blow up badly.

Ok, I read the shmem helpers again, I think those are the only ones
which do the importer mmap -> dma_buf_mmap() forwarding, and hence
break stuff all around here.

They also remove the vma->vm_pgoff offset, which means
unmap_mapping_range won't work anyway. I guess for drivers which use
shmem helpers the hard assumption is that a) can't use p2p dma and b)
pin everything into system memory.

So not a problem. But something to keep in mind. I'll try to do a
kerneldoc patch to note this somewhere. btw on that, did the
timeline/syncobj documentation patch land by now? Just trying to make
sure that doesn't get lost for another few months or so :-/

Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread Daniel Vetter
On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter  wrote:
>
> On Wed, Jul 8, 2020 at 5:05 PM Christian König  
> wrote:
> >
> > Am 08.07.20 um 17:01 schrieb Daniel Vetter:
> > > On Wed, Jul 8, 2020 at 4:37 PM Christian König  
> > > wrote:
> > >> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
> > >>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
> >  Am 07.07.20 um 20:35 schrieb Chris Wilson:
> > > Quoting lepton (2020-07-07 19:17:51)
> > >> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson 
> > >>  wrote:
> > >>> Quoting lepton (2020-07-07 18:05:21)
> >  On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson 
> >   wrote:
> > > If we assign obj->filp, we believe that the create vgem bo is 
> > > native and
> > > allow direct operations like mmap() assuming it behaves as backed 
> > > by a
> > > shmemfs inode. When imported from a dmabuf, the obj->pages are
> > > not always meaningful and the shmemfs backing store misleading.
> > >
> > > Note, that regular mmap access to a vgem bo is via the dumb 
> > > buffer API,
> > > and that rejects attempts to mmap an imported dmabuf,
> >  What do you mean by "regular mmap access" here?  It looks like 
> >  vgem is
> >  using vgem_gem_dumb_map as .dumb_map_offset callback then it 
> >  doesn't call
> >  drm_gem_dumb_map_offset
> > >>> As I too found out, and so had to correct my story telling.
> > >>>
> > >>> By regular mmap() access I mean mmap on the vgem bo [via the dumb 
> > >>> buffer
> > >>> API] as opposed to mmap() via an exported dma-buf fd. I had to look 
> > >>> at
> > >>> igt to see how it was being used.
> > >> Now it seems your fix is to disable "regular mmap" on imported dma 
> > >> buf
> > >> for vgem. I am not really a graphic guy, but then the api looks like:
> > >> for a gem handle, user space has to guess to find out the way to mmap
> > >> it. If user space guess wrong, then it will fail to mmap. Is this the
> > >> expected way
> > >> for people to handle gpu buffer?
> > > You either have a dumb buffer handle, or a dma-buf fd. If you have the
> > > handle, you have to use the dumb buffer API, there's no other way to
> > > mmap it. If you have the dma-buf fd, you should mmap it directly. 
> > > Those
> > > two are clear.
> > >
> > > It's when you import the dma-buf into vgem and create a handle out of
> > > it, that's when the handle is no longer first class and certain uAPI
> > > [the dumb buffer API in particular] fail.
> > >
> > > It's not brilliant, as you say, it requires the user to remember the
> > > difference between the handles, but at the same time it does prevent
> > > them falling into coherency traps by forcing them to use the right
> > > driver to handle the object, and have to consider the additional 
> > > ioctls
> > > that go along with that access.
> >  Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an 
> >  importer
> >  is illegal.
> > 
> >  What we could maybe try to do is to redirect this mmap() API call on 
> >  the
> >  importer to the exporter, but I'm pretty sure that the fs layer 
> >  wouldn't
> >  like that without changes.
> > >>> We already do that, there's a full helper-ified path from I think shmem
> > >>> helpers through prime helpers to forward this all. Including handling
> > >>> buffer offsets and all the other lolz back&forth.
> > >> Oh, that most likely won't work correctly with unpinned DMA-bufs and
> > >> needs to be avoided.
> > >>
> > >> Each file descriptor is associated with an struct address_space. And
> > >> when you mmap() through the importer by redirecting the system call to
> > >> the exporter you end up with the wrong struct address_space in your VMA.
> > >>
> > >> That in turn can go up easily in flames when the exporter tries to
> > >> invalidate the CPU mappings for its DMA-buf while moving it.
> > >>
> > >> Where are we doing this? My last status was that this is forbidden.
> > > Hm I thought we're doing all that already, but looking at the code
> > > again we're only doing this when opening a new drm fd or dma-buf fd.
> > > So the right file->f_mapping is always set at file creation time.
> > >
> > > And we indeed don't frob this more when going another indirection ...
> > > Maybe we screwed up something somewhere :-/
> > >
> > > Also I thought the mapping is only taken after the vma is instatiated,
> > > otherwise the tricks we're playing with dma-buf already wouldn't work:
> > > dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
> > > it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
> > > whether we could adjust vm_file too. Or is that the thing that's
> > > forbidden?
> >
> > Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.

Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-09 Thread Christian König

Am 08.07.20 um 18:19 schrieb Daniel Vetter:

On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter  wrote:

On Wed, Jul 8, 2020 at 5:05 PM Christian König  wrote:

Am 08.07.20 um 17:01 schrieb Daniel Vetter:

On Wed, Jul 8, 2020 at 4:37 PM Christian König  wrote:

Am 08.07.20 um 11:54 schrieb Daniel Vetter:

On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:

Am 07.07.20 um 20:35 schrieb Chris Wilson:

Quoting lepton (2020-07-07 19:17:51)

On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson  wrote:

Quoting lepton (2020-07-07 18:05:21)

On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  wrote:

If we assign obj->filp, we believe that the create vgem bo is native and
allow direct operations like mmap() assuming it behaves as backed by a
shmemfs inode. When imported from a dmabuf, the obj->pages are
not always meaningful and the shmemfs backing store misleading.

Note, that regular mmap access to a vgem bo is via the dumb buffer API,
and that rejects attempts to mmap an imported dmabuf,

What do you mean by "regular mmap access" here?  It looks like vgem is
using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
drm_gem_dumb_map_offset

As I too found out, and so had to correct my story telling.

By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
API] as opposed to mmap() via an exported dma-buf fd. I had to look at
igt to see how it was being used.

Now it seems your fix is to disable "regular mmap" on imported dma buf
for vgem. I am not really a graphic guy, but then the api looks like:
for a gem handle, user space has to guess to find out the way to mmap
it. If user space guess wrong, then it will fail to mmap. Is this the
expected way
for people to handle gpu buffer?

You either have a dumb buffer handle, or a dma-buf fd. If you have the
handle, you have to use the dumb buffer API, there's no other way to
mmap it. If you have the dma-buf fd, you should mmap it directly. Those
two are clear.

It's when you import the dma-buf into vgem and create a handle out of
it, that's when the handle is no longer first class and certain uAPI
[the dumb buffer API in particular] fail.

It's not brilliant, as you say, it requires the user to remember the
difference between the handles, but at the same time it does prevent
them falling into coherency traps by forcing them to use the right
driver to handle the object, and have to consider the additional ioctls
that go along with that access.

Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
is illegal.

What we could maybe try to do is to redirect this mmap() API call on the
importer to the exporter, but I'm pretty sure that the fs layer wouldn't
like that without changes.

We already do that, there's a full helper-ified path from I think shmem
helpers through prime helpers to forward this all. Including handling
buffer offsets and all the other lolz back&forth.

Oh, that most likely won't work correctly with unpinned DMA-bufs and
needs to be avoided.

Each file descriptor is associated with an struct address_space. And
when you mmap() through the importer by redirecting the system call to
the exporter you end up with the wrong struct address_space in your VMA.

That in turn can go up easily in flames when the exporter tries to
invalidate the CPU mappings for its DMA-buf while moving it.

Where are we doing this? My last status was that this is forbidden.

Hm I thought we're doing all that already, but looking at the code
again we're only doing this when opening a new drm fd or dma-buf fd.
So the right file->f_mapping is always set at file creation time.

And we indeed don't frob this more when going another indirection ...
Maybe we screwed up something somewhere :-/

Also I thought the mapping is only taken after the vma is instatiated,
otherwise the tricks we're playing with dma-buf already wouldn't work:
dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
whether we could adjust vm_file too. Or is that the thing that's
forbidden?

Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.

But changing vma->vm_file, that's something I haven't seen before and
most likely could blow up badly.

Ok, I read the shmem helpers again, I think those are the only ones
which do the importer mmap -> dma_buf_mmap() forwarding, and hence
break stuff all around here.

They also remove the vma->vm_pgoff offset, which means
unmap_mapping_range wont work anyway. I guess for drivers which use
shmem helpers the hard assumption is that a) can't use p2p dma and b)
pin everything into system memory.

So not a problem. But something to keep in mind. I'll try to do a
kerneldoc patch to note this somewhere. btw on that, did the
timeline/syncobj documentation patch land by now? Just trying to make
sure that doesn't get lost for another few months or so :-/

Ok, so maybe it is a problem. Because there is a drm_gem_shmem_purge()
which uses unmap_mapping_range.

Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-09 Thread Christian König

Am 08.07.20 um 18:11 schrieb Daniel Vetter:

On Wed, Jul 8, 2020 at 5:05 PM Christian König  wrote:

Am 08.07.20 um 17:01 schrieb Daniel Vetter:

On Wed, Jul 8, 2020 at 4:37 PM Christian König  wrote:

Am 08.07.20 um 11:54 schrieb Daniel Vetter:

On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:

Am 07.07.20 um 20:35 schrieb Chris Wilson:

Quoting lepton (2020-07-07 19:17:51)

On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson  wrote:

Quoting lepton (2020-07-07 18:05:21)

On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  wrote:

If we assign obj->filp, we believe that the create vgem bo is native and
allow direct operations like mmap() assuming it behaves as backed by a
shmemfs inode. When imported from a dmabuf, the obj->pages are
not always meaningful and the shmemfs backing store misleading.

Note, that regular mmap access to a vgem bo is via the dumb buffer API,
and that rejects attempts to mmap an imported dmabuf,

What do you mean by "regular mmap access" here?  It looks like vgem is
using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
drm_gem_dumb_map_offset

As I too found out, and so had to correct my story telling.

By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
API] as opposed to mmap() via an exported dma-buf fd. I had to look at
igt to see how it was being used.

Now it seems your fix is to disable "regular mmap" on imported dma buf
for vgem. I am not really a graphic guy, but then the api looks like:
for a gem handle, user space has to guess to find out the way to mmap
it. If user space guess wrong, then it will fail to mmap. Is this the
expected way
for people to handle gpu buffer?

You either have a dumb buffer handle, or a dma-buf fd. If you have the
handle, you have to use the dumb buffer API, there's no other way to
mmap it. If you have the dma-buf fd, you should mmap it directly. Those
two are clear.

It's when you import the dma-buf into vgem and create a handle out of
it, that's when the handle is no longer first class and certain uAPI
[the dumb buffer API in particular] fail.

It's not brilliant, as you say, it requires the user to remember the
difference between the handles, but at the same time it does prevent
them falling into coherency traps by forcing them to use the right
driver to handle the object, and have to consider the additional ioctls
that go along with that access.

Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
is illegal.

What we could maybe try to do is to redirect this mmap() API call on the
importer to the exporter, but I'm pretty sure that the fs layer wouldn't
like that without changes.

We already do that, there's a full helper-ified path from I think shmem
helpers through prime helpers to forward this all. Including handling
buffer offsets and all the other lolz back&forth.

Oh, that most likely won't work correctly with unpinned DMA-bufs and
needs to be avoided.

Each file descriptor is associated with an struct address_space. And
when you mmap() through the importer by redirecting the system call to
the exporter you end up with the wrong struct address_space in your VMA.

That in turn can go up easily in flames when the exporter tries to
invalidate the CPU mappings for its DMA-buf while moving it.

Where are we doing this? My last status was that this is forbidden.

Hm I thought we're doing all that already, but looking at the code
again we're only doing this when opening a new drm fd or dma-buf fd.
So the right file->f_mapping is always set at file creation time.

And we indeed don't frob this more when going another indirection ...
Maybe we screwed up something somewhere :-/

Also I thought the mapping is only taken after the vma is instatiated,
otherwise the tricks we're playing with dma-buf already wouldn't work:
dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
whether we could adjust vm_file too. Or is that the thing that's
forbidden?

Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.

But changing vma->vm_file, that's something I haven't seen before and
most likely could blow up badly.

Ok, I read the shmem helpers again, I think those are the only ones
which do the importer mmap -> dma_buf_mmap() forwarding, and hence
break stuff all around here.

They also remove the vma->vm_pgoff offset, which means
unmap_mapping_range wont work anyway. I guess for drivers which use
shmem helpers the hard assumption is that a) can't use p2p dma and b)
pin everything into system memory.

So not a problem. But something to keep in mind. I'll try to do a
kerneldoc patch to note this somewhere. btw on that, did the
timeline/syncobj documentation patch land by now? Just trying to make
sure that doesn't get lost for another few months or so :-/


I still haven't found time to double check the documentation and most 
likely won't in quite a while.


Sorry for that, but yeah you know

Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-09 Thread Steven Price

On 09/07/2020 09:48, Christian König wrote:

Am 08.07.20 um 18:19 schrieb Daniel Vetter:

On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter  wrote:
On Wed, Jul 8, 2020 at 5:05 PM Christian König 
 wrote:

Am 08.07.20 um 17:01 schrieb Daniel Vetter:
On Wed, Jul 8, 2020 at 4:37 PM Christian König 
 wrote:

Am 08.07.20 um 11:54 schrieb Daniel Vetter:

On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:

Am 07.07.20 um 20:35 schrieb Chris Wilson:

Quoting lepton (2020-07-07 19:17:51)
On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson 
 wrote:

Quoting lepton (2020-07-07 18:05:21)
On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson 
 wrote:
If we assign obj->filp, we believe that the create vgem bo 
is native and
allow direct operations like mmap() assuming it behaves as 
backed by a

shmemfs inode. When imported from a dmabuf, the obj->pages are
not always meaningful and the shmemfs backing store 
misleading.


Note, that regular mmap access to a vgem bo is via the dumb 
buffer API,

and that rejects attempts to mmap an imported dmabuf,
What do you mean by "regular mmap access" here?  It looks 
like vgem is
using vgem_gem_dumb_map as .dumb_map_offset callback then it 
doesn't call

drm_gem_dumb_map_offset

As I too found out, and so had to correct my story telling.

By regular mmap() access I mean mmap on the vgem bo [via the 
dumb buffer
API] as opposed to mmap() via an exported dma-buf fd. I had 
to look at

igt to see how it was being used.
Now it seems your fix is to disable "regular mmap" on imported 
dma buf
for vgem. I am not really a graphic guy, but then the api 
looks like:
for a gem handle, user space has to guess to find out the way 
to mmap
it. If user space guess wrong, then it will fail to mmap. Is 
this the

expected way
for people to handle gpu buffer?
You either have a dumb buffer handle, or a dma-buf fd. If you 
have the
handle, you have to use the dumb buffer API, there's no other 
way to
mmap it. If you have the dma-buf fd, you should mmap it 
directly. Those

two are clear.

It's when you import the dma-buf into vgem and create a handle 
out of
it, that's when the handle is no longer first class and certain 
uAPI

[the dumb buffer API in particular] fail.

It's not brilliant, as you say, it requires the user to 
remember the
difference between the handles, but at the same time it does 
prevent

them falling into coherency traps by forcing them to use the right
driver to handle the object, and have to consider the 
additional ioctls

that go along with that access.
Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of 
an importer

is illegal.

What we could maybe try to do is to redirect this mmap() API 
call on the
importer to the exporter, but I'm pretty sure that the fs layer 
wouldn't

like that without changes.
We already do that, there's a full helper-ified path from I think 
shmem
helpers through prime helpers to forward this all. Including 
handling

buffer offsets and all the other lolz back&forth.

Oh, that most likely won't work correctly with unpinned DMA-bufs and
needs to be avoided.

Each file descriptor is associated with an struct address_space. And
when you mmap() through the importer by redirecting the system 
call to
the exporter you end up with the wrong struct address_space in 
your VMA.


That in turn can go up easily in flames when the exporter tries to
invalidate the CPU mappings for its DMA-buf while moving it.

Where are we doing this? My last status was that this is forbidden.

Hm I thought we're doing all that already, but looking at the code
again we're only doing this when opening a new drm fd or dma-buf fd.
So the right file->f_mapping is always set at file creation time.

And we indeed don't frob this more when going another indirection ...
Maybe we screwed up something somewhere :-/

Also I thought the mapping is only taken after the vma is instatiated,
otherwise the tricks we're playing with dma-buf already wouldn't work:
dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
whether we could adjust vm_file too. Or is that the thing that's
forbidden?

Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.

But changing vma->vm_file, that's something I haven't seen before and
most likely could blow up badly.

Ok, I read the shmem helpers again, I think those are the only ones
which do the importer mmap -> dma_buf_mmap() forwarding, and hence
break stuff all around here.

They also remove the vma->vm_pgoff offset, which means
unmap_mapping_range wont work anyway. I guess for drivers which use
shmem helpers the hard assumption is that a) can't use p2p dma and b)
pin everything into system memory.

So not a problem. But something to keep in mind. I'll try to do a
kerneldoc patch to note this somewhere. btw on that, did the
timeline/syncobj documentation patch land by now? Just trying to make
sure that doesn't get lost for another few months or so :-/

Ok, 

Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-09 Thread Christian König

Am 09.07.20 um 15:54 schrieb Steven Price:

On 09/07/2020 09:48, Christian König wrote:

Am 08.07.20 um 18:19 schrieb Daniel Vetter:

On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter  wrote:
On Wed, Jul 8, 2020 at 5:05 PM Christian König 
 wrote:

Am 08.07.20 um 17:01 schrieb Daniel Vetter:
On Wed, Jul 8, 2020 at 4:37 PM Christian König 
 wrote:

Am 08.07.20 um 11:54 schrieb Daniel Vetter:

On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:

Am 07.07.20 um 20:35 schrieb Chris Wilson:

Quoting lepton (2020-07-07 19:17:51)
On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson 
 wrote:

Quoting lepton (2020-07-07 18:05:21)
On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson 
 wrote:
If we assign obj->filp, we believe that the create vgem 
bo is native and
allow direct operations like mmap() assuming it behaves 
as backed by a
shmemfs inode. When imported from a dmabuf, the 
obj->pages are
not always meaningful and the shmemfs backing store 
misleading.


Note, that regular mmap access to a vgem bo is via the 
dumb buffer API,

and that rejects attempts to mmap an imported dmabuf,
What do you mean by "regular mmap access" here?  It looks 
like vgem is
using vgem_gem_dumb_map as .dumb_map_offset callback then 
it doesn't call

drm_gem_dumb_map_offset

As I too found out, and so had to correct my story telling.

By regular mmap() access I mean mmap on the vgem bo [via 
the dumb buffer
API] as opposed to mmap() via an exported dma-buf fd. I had 
to look at

igt to see how it was being used.
Now it seems your fix is to disable "regular mmap" on 
imported dma buf
for vgem. I am not really a graphic guy, but then the api 
looks like:
for a gem handle, user space has to guess to find out the 
way to mmap
it. If user space guess wrong, then it will fail to mmap. Is 
this the

expected way
for people to handle gpu buffer?
You either have a dumb buffer handle, or a dma-buf fd. If you 
have the
handle, you have to use the dumb buffer API, there's no other 
way to
mmap it. If you have the dma-buf fd, you should mmap it 
directly. Those

two are clear.

It's when you import the dma-buf into vgem and create a 
handle out of
it, that's when the handle is no longer first class and 
certain uAPI

[the dumb buffer API in particular] fail.

It's not brilliant, as you say, it requires the user to 
remember the
difference between the handles, but at the same time it does 
prevent
them falling into coherency traps by forcing them to use the 
right
driver to handle the object, and have to consider the 
additional ioctls

that go along with that access.
Yes, Chris is right. Mapping DMA-buf through the mmap() APIs 
of an importer

is illegal.

What we could maybe try to do is to redirect this mmap() API 
call on the
importer to the exporter, but I'm pretty sure that the fs 
layer wouldn't

like that without changes.
We already do that, there's a full helper-ified path from I 
think shmem
helpers through prime helpers to forward this all. Including 
handling

buffer offsets and all the other lolz back&forth.
Oh, that most likely won't work correctly with unpinned DMA-bufs 
and

needs to be avoided.

Each file descriptor is associated with an struct address_space. 
And
when you mmap() through the importer by redirecting the system 
call to
the exporter you end up with the wrong struct address_space in 
your VMA.


That in turn can go up easily in flames when the exporter tries to
invalidate the CPU mappings for its DMA-buf while moving it.

Where are we doing this? My last status was that this is forbidden.

Hm I thought we're doing all that already, but looking at the code
again we're only doing this when opening a new drm fd or dma-buf fd.
So the right file->f_mapping is always set at file creation time.

And we indeed don't frob this more when going another indirection 
...

Maybe we screwed up something somewhere :-/

Also I thought the mapping is only taken after the vma is 
instatiated,
otherwise the tricks we're playing with dma-buf already wouldn't 
work:
dma-buf has the buffer always at offset 0, whereas gem drm_fd 
mmap has

it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
whether we could adjust vm_file too. Or is that the thing that's
forbidden?

Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.

But changing vma->vm_file, that's something I haven't seen before and
most likely could blow up badly.

Ok, I read the shmem helpers again, I think those are the only ones
which do the importer mmap -> dma_buf_mmap() forwarding, and hence
break stuff all around here.

They also remove the vma->vm_pgoff offset, which means
unmap_mapping_range wont work anyway. I guess for drivers which use
shmem helpers the hard assumption is that a) can't use p2p dma and b)
pin everything into system memory.

So not a problem. But something to keep in mind. I'll try to do a
kerneldoc patch to note this somewhere. btw on that, did the
timeline/syncobj documentation patch land by now? Just trying to make
sure that doesn't get lost for another few months or so :-/