Re: [PATCH v4 3/7] accel/ivpu: Add GEM buffer object management

2023-01-09 Thread Andrew Davis

On 1/9/23 5:47 AM, Jacek Lawrynowicz wrote:
> Hi,
>
> On 06.01.2023 14:29, Stanislaw Gruszka wrote:
>> Hi
>>
>> On Thu, Jan 05, 2023 at 12:46:51PM -0600, Andrew Davis wrote:
>>> On 12/8/22 5:07 AM, Jacek Lawrynowicz wrote:
>>>> Adds four types of GEM-based BOs for the VPU:
>>>>   - shmem
>>>>   - userptr
>>>
>>> Do you have some specific need for userptr that would not
>>> be covered by prime import + heaps? I'm just trying to get
>>> a feel for the typical use-cases for these.
>>
>> Honestly, I'm not sure. I think we have use-cases that justify
>> adding userptr, but have to check with our team members that
>> better understand the requirements.
>
> It would be great if userptr could be replaced by dma-buf heaps.
> I will add this to TODO and we will look into this after the driver is merged.

We should also be clear on the export capabilities up front
for these kinds of drivers. DRM allows re-exporting as DMA-BUF
no matter the allocation style/location, which has caused issues.
Let's start the accel framework with the rule that if you want a
shareable buffer, you should allocate it from Heaps, not the driver,
then pass it into the driver.
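
A minimal userspace sketch of that rule, assuming the standard system
heap at /dev/dma_heap/system and libdrm (the import_from_heap wrapper
itself is hypothetical): allocate the shareable buffer from a DMA-BUF
heap, then hand it to the driver as a PRIME fd.

#include <errno.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>
#include <xf86drm.h>

/* Allocate a shareable buffer from the system dma-buf heap, then
 * import it into the accel driver as a GEM handle via PRIME. */
static int import_from_heap(int drm_fd, size_t size, uint32_t *handle)
{
	struct dma_heap_allocation_data alloc = {
		.len = size,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int ret, heap_fd = open("/dev/dma_heap/system", O_RDWR | O_CLOEXEC);

	if (heap_fd < 0)
		return -errno;
	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc);
	close(heap_fd);
	if (ret < 0)
		return -errno;

	/* The driver only ever sees the dma-buf fd it imports. */
	return drmPrimeFDToHandle(drm_fd, alloc.fd, handle);
}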

Merging as-is to save churn now would be fine, but it must be clear
that this is not a stable ABI just because it was allowed to be
merged. userptr/export will be removed later and should not be used
by the userspace driver.

Andrew


> Regards,
> Jacek


Re: [PATCH v4 3/7] accel/ivpu: Add GEM buffer object management

2023-01-09 Thread Jacek Lawrynowicz
Hi,

On 06.01.2023 19:25, Daniel Vetter wrote:
> On Fri, 6 Jan 2023 at 14:23, Stanislaw Gruszka wrote:
>>
>> On Fri, Jan 06, 2023 at 11:50:05AM +0100, Daniel Vetter wrote:
>>> On Thu, Dec 08, 2022 at 12:07:29PM +0100, Jacek Lawrynowicz wrote:
>>>> Adds four types of GEM-based BOs for the VPU:
>>>>   - shmem
>>>>   - userptr
>>>>   - internal
>>>
>>> Uh, what do you need this for? Usually the way we do these is just allocate a
>>> normal bo, and then pin them.
>>
>> I think we do alloc/pin this way, but all our bo's are GEM based.
>> The bo's we use internally, and other non-shmem bo's, are created
>> with drm_gem_private_object_init(). I think this way is simpler than
>> having separate code for non-GEM and GEM bo's ...
> 
> They should all be gem bos; I guess you mean shmem vs non-shmem? And
> the allocate+pin is the standard approach for drivers that have
> somewhat dynamic bos (i.e. not using dma_alloc) and need some of them
> (hopefully only for driver internal objects, not for userspace) pinned
> in place. So hand-rolling a perma-pinned gem bo for internal
> objects is rather strange by drm driver standards.
> 
>>> Also, gem shmem helpers should be able to mostly cover you here; why not
>>> use those? Might need some work to push basic userptr to them, but we have
>>> enough drivers reinventing that wheel to justify that work.
>>>
>>> Can, I guess, also be done after merging.
>>
>> ... but if not, we can add this to TODO.
> 
> Yeah, I'm fine with a TODO to cut these over to shmem helpers; this
> driver has been stuck in limbo for way too long anyway.

Yeah, I think it would be easier for everyone if this driver was merged.
Especially for me :)
I feel like I'm shifting tons of coal every time I need to update it.
But also from the reviewer's perspective it would be easier to track changes
between driver revisions if only a delta was posted instead of the whole 8K
lines of code.

Guys, please make my day and merge it to 6.3.

Regards,
Jacek



Re: [PATCH v4 3/7] accel/ivpu: Add GEM buffer object management

2023-01-09 Thread Jacek Lawrynowicz
Hi,

On 06.01.2023 14:29, Stanislaw Gruszka wrote:
> Hi
> 
> On Thu, Jan 05, 2023 at 12:46:51PM -0600, Andrew Davis wrote:
>> On 12/8/22 5:07 AM, Jacek Lawrynowicz wrote:
>>> Adds four types of GEM-based BOs for the VPU:
>>>   - shmem
>>>   - userptr
>>
>> Do you have some specific need for userptr that would not
>> be covered by prime import + heaps? I'm just trying to get
>> a feel for the typical use-cases for these.
> 
> Honestly, I'm not sure. I think we have use-cases that justify
> adding userptr, but have to check with our team members that
> better understand the requirements.

It would be great if userptr could be replaced by dma-buf heaps.
I will add this to TODO and we will look into this after the driver is merged.

Regards,
Jacek


Re: [PATCH v4 3/7] accel/ivpu: Add GEM buffer object management

2023-01-06 Thread Daniel Vetter
On Fri, 6 Jan 2023 at 14:23, Stanislaw Gruszka wrote:
>
> On Fri, Jan 06, 2023 at 11:50:05AM +0100, Daniel Vetter wrote:
> > On Thu, Dec 08, 2022 at 12:07:29PM +0100, Jacek Lawrynowicz wrote:
> > > Adds four types of GEM-based BOs for the VPU:
> > >   - shmem
> > >   - userptr
> > >   - internal
> >
> > Uh, what do you need this for? Usually the way we do these is just allocate a
> > normal bo, and then pin them.
>
> I think we do alloc/pin this way, but all our bo's are GEM based.
> The bo's we use internally, and other non-shmem bo's, are created
> with drm_gem_private_object_init(). I think this way is simpler than
> having separate code for non-GEM and GEM bo's ...

They should all be gem bos; I guess you mean shmem vs non-shmem? And
the allocate+pin is the standard approach for drivers that have
somewhat dynamic bos (i.e. not using dma_alloc) and need some of them
(hopefully only for driver internal objects, not for userspace) pinned
in place. So hand-rolling a perma-pinned gem bo for internal
objects is rather strange by drm driver standards.
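
A minimal sketch of that standard approach, assuming the shmem helpers
fit the driver (the wrapper below is hypothetical; drm_gem_shmem_create(),
drm_gem_shmem_pin() and drm_gem_object_put() are the real helpers):

#include <drm/drm_gem_shmem_helper.h>

/* Allocate a normal shmem-backed BO, then perma-pin it for a
 * driver-internal object, instead of hand-rolling a private BO. */
static struct drm_gem_shmem_object *internal_bo_create(struct drm_device *dev,
						       size_t size)
{
	struct drm_gem_shmem_object *shmem;
	int ret;

	shmem = drm_gem_shmem_create(dev, size);	/* a normal BO */
	if (IS_ERR(shmem))
		return shmem;

	ret = drm_gem_shmem_pin(shmem);			/* pinned in place */
	if (ret) {
		drm_gem_object_put(&shmem->base);
		return ERR_PTR(ret);
	}
	return shmem;
}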

> > Also, gem shmem helpers should be able to mostly cover you here; why not
> > use those? Might need some work to push basic userptr to them, but we have
> > enough drivers reinventing that wheel to justify that work.
> >
> > Can, I guess, also be done after merging.
>
> ... but if not, we can add this to TODO.

Yeah, I'm fine with a TODO to cut these over to shmem helpers; this
driver has been stuck in limbo for way too long anyway.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v4 3/7] accel/ivpu: Add GEM buffer object management

2023-01-06 Thread Stanislaw Gruszka
Hi

On Thu, Jan 05, 2023 at 12:46:51PM -0600, Andrew Davis wrote:
> On 12/8/22 5:07 AM, Jacek Lawrynowicz wrote:
> > Adds four types of GEM-based BOs for the VPU:
> >   - shmem
> >   - userptr
> 
> Do you have some specific need for userptr that would not
> be covered by prime import + heaps? I'm just trying to get
> a feel for the typical use-cases for these.

Honestly, I'm not sure. I think we have use-cases that justify
adding userptr, but have to check with our team members that
better understand the requirements.

Regards
Stanislaw


Re: [PATCH v4 3/7] accel/ivpu: Add GEM buffer object management

2023-01-06 Thread Stanislaw Gruszka
On Fri, Jan 06, 2023 at 11:50:05AM +0100, Daniel Vetter wrote:
> On Thu, Dec 08, 2022 at 12:07:29PM +0100, Jacek Lawrynowicz wrote:
> > Adds four types of GEM-based BOs for the VPU:
> >   - shmem
> >   - userptr
> >   - internal
> 
> Uh, what do you need this for? Usually the way we do these is just allocate a
> normal bo, and then pin them.

I think we do alloc/pin this way, but all our bo's are GEM based.
The bo's we use internally, and other non-shmem bo's, are created
with drm_gem_private_object_init(). I think this way is simpler than
having separate code for non-GEM and GEM bo's ...
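
A rough sketch of that scheme, with hypothetical struct and function
names (drm_gem_private_object_init() is the real GEM call): the
internal, non-shmem BO still rides on the GEM core, but the driver
supplies the backing pages itself.

#include <linux/slab.h>
#include <drm/drm_gem.h>

struct my_bo {
	struct drm_gem_object base;
	/* driver-managed pages, device VA, etc. would live here */
};

static struct my_bo *my_internal_bo_alloc(struct drm_device *dev, size_t size)
{
	struct my_bo *bo = kzalloc(sizeof(*bo), GFP_KERNEL);

	if (!bo)
		return ERR_PTR(-ENOMEM);

	/* No shmem backing: init as a private GEM object so it still
	 * plugs into GEM handles and the rest of the GEM machinery.
	 * A real driver would also set bo->base.funcs and allocate
	 * the backing pages itself. */
	drm_gem_private_object_init(dev, &bo->base, PAGE_ALIGN(size));

	return bo;
}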

> > Also, gem shmem helpers should be able to mostly cover you here; why not
> > use those? Might need some work to push basic userptr to them, but we have
> > enough drivers reinventing that wheel to justify that work.
> >
> > Can, I guess, also be done after merging.

... but if not, we can add this to TODO.

Regards
Stanislaw


Re: [PATCH v4 3/7] accel/ivpu: Add GEM buffer object management

2023-01-06 Thread Daniel Vetter
On Thu, Dec 08, 2022 at 12:07:29PM +0100, Jacek Lawrynowicz wrote:
> Adds four types of GEM-based BOs for the VPU:
>   - shmem
>   - userptr
>   - internal

Uh, what do you need this for? Usually the way we do these is just allocate a
normal bo, and then pin them.

Also, gem shmem helpers should be able to mostly cover you here; why not
use those? Might need some work to push basic userptr to them, but we have
enough drivers reinventing that wheel to justify that work.

Can, I guess, also be done after merging.
-Daniel
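
A sketch of what such a cut-over could look like, assuming the driver
can live with shmem-backed BOs (the driver struct is hypothetical;
DRM_GEM_SHMEM_DRIVER_OPS is the real helper macro):

#include <drm/drm_drv.h>
#include <drm/drm_gem_shmem_helper.h>

static const struct drm_driver example_driver = {
	.driver_features = DRIVER_GEM | DRIVER_COMPUTE_ACCEL,
	/* PRIME import/export, mmap and dumb_create via shmem helpers. */
	DRM_GEM_SHMEM_DRIVER_OPS,
	/* .open/.postclose, .ioctls, .fops etc. stay driver-specific */
};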

>   - prime
> 
> All types are implemented as struct ivpu_bo, based on
> struct drm_gem_object. VPU address is allocated when buffer is created
> except for imported prime buffers that allocate it in BO_INFO IOCTL due
> to missing file_priv arg in gem_prime_import callback.
> Internal buffers are pinned on creation; the rest of the buffer types
> can be pinned on demand (in the SUBMIT IOCTL).
> Buffer VPU address, allocated pages and mappings are released when the
> buffer is destroyed.
> An eviction mechanism is planned for future versions.
> 
> Add three new IOCTLs: BO_CREATE, BO_INFO, BO_USERPTR
> 
> Signed-off-by: Jacek Lawrynowicz 
> ---
>  drivers/accel/ivpu/Makefile   |   1 +
>  drivers/accel/ivpu/ivpu_drv.c |  31 +-
>  drivers/accel/ivpu/ivpu_drv.h |   1 +
>  drivers/accel/ivpu/ivpu_gem.c | 820 ++
>  drivers/accel/ivpu/ivpu_gem.h | 128 ++
>  include/uapi/drm/ivpu_drm.h   | 127 ++
>  6 files changed, 1106 insertions(+), 2 deletions(-)
>  create mode 100644 drivers/accel/ivpu/ivpu_gem.c
>  create mode 100644 drivers/accel/ivpu/ivpu_gem.h
> 
> diff --git a/drivers/accel/ivpu/Makefile b/drivers/accel/ivpu/Makefile
> index 37b8bf1d3247..1b4b24ebf5ea 100644
> --- a/drivers/accel/ivpu/Makefile
> +++ b/drivers/accel/ivpu/Makefile
> @@ -3,6 +3,7 @@
>  
>  intel_vpu-y := \
>   ivpu_drv.o \
> + ivpu_gem.o \
>   ivpu_hw_mtl.o \
>   ivpu_mmu.o \
>   ivpu_mmu_context.o
> diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
> index a22d41ca5a4b..29e78c5ec7c5 100644
> --- a/drivers/accel/ivpu/ivpu_drv.c
> +++ b/drivers/accel/ivpu/ivpu_drv.c
> @@ -12,8 +12,10 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include "ivpu_drv.h"
> +#include "ivpu_gem.h"
>  #include "ivpu_hw.h"
>  #include "ivpu_mmu.h"
>  #include "ivpu_mmu_context.h"
> @@ -49,6 +51,24 @@ struct ivpu_file_priv *ivpu_file_priv_get(struct 
> ivpu_file_priv *file_priv)
>   return file_priv;
>  }
>  
> +struct ivpu_file_priv *ivpu_file_priv_get_by_ctx_id(struct ivpu_device 
> *vdev, unsigned long id)
> +{
> + struct ivpu_file_priv *file_priv;
> +
> + xa_lock_irq(&vdev->context_xa);
> + file_priv = xa_load(&vdev->context_xa, id);
> + /* file_priv may still be in context_xa during file_priv_release() */
> + if (file_priv && !kref_get_unless_zero(&file_priv->ref))
> + file_priv = NULL;
> + xa_unlock_irq(&vdev->context_xa);
> +
> + if (file_priv)
> + ivpu_dbg(vdev, KREF, "file_priv get by id: ctx %u refcount 
> %u\n",
> +  file_priv->ctx.id, kref_read(&file_priv->ref));
> +
> + return file_priv;
> +}
> +
>  static void file_priv_release(struct kref *ref)
>  {
>   struct ivpu_file_priv *file_priv = container_of(ref, struct 
> ivpu_file_priv, ref);
> @@ -57,7 +77,7 @@ static void file_priv_release(struct kref *ref)
>   ivpu_dbg(vdev, FILE, "file_priv release: ctx %u\n", file_priv->ctx.id);
>  
> ivpu_mmu_user_context_fini(vdev, &file_priv->ctx);
> - WARN_ON(xa_erase_irq(&vdev->context_xa, file_priv->ctx.id) != 
> file_priv);
> + drm_WARN_ON(&vdev->drm, xa_erase_irq(&vdev->context_xa, 
> file_priv->ctx.id) != file_priv);
>   kfree(file_priv);
>  }
>  
> @@ -66,7 +86,7 @@ void ivpu_file_priv_put(struct ivpu_file_priv **link)
>   struct ivpu_file_priv *file_priv = *link;
>   struct ivpu_device *vdev = file_priv->vdev;
>  
> - WARN_ON(!file_priv);
> + drm_WARN_ON(&vdev->drm, !file_priv);
>  
>   ivpu_dbg(vdev, KREF, "file_priv put: ctx %u refcount %u\n",
>  file_priv->ctx.id, kref_read(&file_priv->ref));
> @@ -200,6 +220,9 @@ static void ivpu_postclose(struct drm_device *dev, struct 
> drm_file *file)
>  static const struct drm_ioctl_desc ivpu_drm_ioctls[] = {
>   DRM_IOCTL_DEF_DRV(IVPU_GET_PARAM, ivpu_get_param_ioctl, 0),
>   DRM_IOCTL_DEF_DRV(IVPU_SET_PARAM, ivpu_set_param_ioctl, 0),
> + DRM_IOCTL_DEF_DRV(IVPU_BO_CREATE, ivpu_bo_create_ioctl, 0),
> + DRM_IOCTL_DEF_DRV(IVPU_BO_INFO, ivpu_bo_info_ioctl, 0),
> + DRM_IOCTL_DEF_DRV(IVPU_BO_USERPTR, ivpu_bo_userptr_ioctl, 0),
>  };
>  
>  int ivpu_shutdown(struct ivpu_device *vdev)
> @@ -233,6 +256,10 @@ static const struct drm_driver driver = {
>  
>   .open = ivpu_open,
>   .postclose = ivpu_postclose,
> + .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
> + .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> + .gem_prime_import = ivpu_gem_prime_import,
> + .gem_prime_mmap = drm_gem_prime_mmap,
>  
>   

Re: [PATCH v4 3/7] accel/ivpu: Add GEM buffer object management

2023-01-05 Thread Andrew Davis

On 12/8/22 5:07 AM, Jacek Lawrynowicz wrote:
> Adds four types of GEM-based BOs for the VPU:
>   - shmem
>   - userptr


Do you have some specific need for userptr that would not
be covered by prime import + heaps? I'm just trying to get
a feel for the typical use-cases for these.

Andrew


>   - internal
>   - prime
>
> All types are implemented as struct ivpu_bo, based on
> struct drm_gem_object. VPU address is allocated when buffer is created
> except for imported prime buffers that allocate it in BO_INFO IOCTL due
> to missing file_priv arg in gem_prime_import callback.
> Internal buffers are pinned on creation; the rest of the buffer types
> can be pinned on demand (in the SUBMIT IOCTL).
> Buffer VPU address, allocated pages and mappings are released when the
> buffer is destroyed.
> An eviction mechanism is planned for future versions.
>
> Add three new IOCTLs: BO_CREATE, BO_INFO, BO_USERPTR
>
> Signed-off-by: Jacek Lawrynowicz 


Re: [PATCH v4 3/7] accel/ivpu: Add GEM buffer object management

2022-12-19 Thread Jacek Lawrynowicz
Hi,

On 18.12.2022 11:23, Oded Gabbay wrote:
> On Thu, Dec 8, 2022 at 1:08 PM Jacek Lawrynowicz wrote:
>> Adds four types of GEM-based BOs for the VPU:
>>   - shmem
>>   - userptr
>>   - internal
>>   - prime
>>
>> All types are implemented as struct ivpu_bo, based on
>> struct drm_gem_object. VPU address is allocated when buffer is created
>> except for imported prime buffers that allocate it in BO_INFO IOCTL due
>> to missing file_priv arg in gem_prime_import callback.
>> Internal buffers are pinned on creation; the rest of the buffer types
>> can be pinned on demand (in the SUBMIT IOCTL).
>> Buffer VPU address, allocated pages and mappings are released when the
>> buffer is destroyed.
>> An eviction mechanism is planned for future versions.
>>
>> Add three new IOCTLs: BO_CREATE, BO_INFO, BO_USERPTR
>>
>> Signed-off-by: Jacek Lawrynowicz 
>> ---
>>  drivers/accel/ivpu/Makefile   |   1 +
>>  drivers/accel/ivpu/ivpu_drv.c |  31 +-
>>  drivers/accel/ivpu/ivpu_drv.h |   1 +
>>  drivers/accel/ivpu/ivpu_gem.c | 820 ++
>>  drivers/accel/ivpu/ivpu_gem.h | 128 ++
>>  include/uapi/drm/ivpu_drm.h   | 127 ++
>>  6 files changed, 1106 insertions(+), 2 deletions(-)
>>  create mode 100644 drivers/accel/ivpu/ivpu_gem.c
>>  create mode 100644 drivers/accel/ivpu/ivpu_gem.h
>>
>> diff --git a/drivers/accel/ivpu/Makefile b/drivers/accel/ivpu/Makefile
>> index 37b8bf1d3247..1b4b24ebf5ea 100644
>> --- a/drivers/accel/ivpu/Makefile
>> +++ b/drivers/accel/ivpu/Makefile
>> @@ -3,6 +3,7 @@
>>
>>  intel_vpu-y := \
>> ivpu_drv.o \
>> +   ivpu_gem.o \
>> ivpu_hw_mtl.o \
>> ivpu_mmu.o \
>> ivpu_mmu_context.o
>> diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
>> index a22d41ca5a4b..29e78c5ec7c5 100644
>> --- a/drivers/accel/ivpu/ivpu_drv.c
>> +++ b/drivers/accel/ivpu/ivpu_drv.c
>> @@ -12,8 +12,10 @@
>>  #include 
>>  #include 
>>  #include 
>> +#include 
>>
>>  #include "ivpu_drv.h"
>> +#include "ivpu_gem.h"
>>  #include "ivpu_hw.h"
>>  #include "ivpu_mmu.h"
>>  #include "ivpu_mmu_context.h"
>> @@ -49,6 +51,24 @@ struct ivpu_file_priv *ivpu_file_priv_get(struct 
>> ivpu_file_priv *file_priv)
>> return file_priv;
>>  }
>>
>> +struct ivpu_file_priv *ivpu_file_priv_get_by_ctx_id(struct ivpu_device 
>> *vdev, unsigned long id)
>> +{
>> +   struct ivpu_file_priv *file_priv;
>> +
>> +   xa_lock_irq(&vdev->context_xa);
>> +   file_priv = xa_load(&vdev->context_xa, id);
>> +   /* file_priv may still be in context_xa during file_priv_release() */
>> +   if (file_priv && !kref_get_unless_zero(&file_priv->ref))
>> +   file_priv = NULL;
>> +   xa_unlock_irq(&vdev->context_xa);
>> +
>> +   if (file_priv)
>> +   ivpu_dbg(vdev, KREF, "file_priv get by id: ctx %u refcount 
>> %u\n",
>> +file_priv->ctx.id, kref_read(&file_priv->ref));
>> +
>> +   return file_priv;
>> +}
>> +
>>  static void file_priv_release(struct kref *ref)
>>  {
>> struct ivpu_file_priv *file_priv = container_of(ref, struct 
>> ivpu_file_priv, ref);
>> @@ -57,7 +77,7 @@ static void file_priv_release(struct kref *ref)
>> ivpu_dbg(vdev, FILE, "file_priv release: ctx %u\n", 
>> file_priv->ctx.id);
>>
>> ivpu_mmu_user_context_fini(vdev, &file_priv->ctx);
>> -   WARN_ON(xa_erase_irq(&vdev->context_xa, file_priv->ctx.id) != 
>> file_priv);
>> +   drm_WARN_ON(&vdev->drm, xa_erase_irq(&vdev->context_xa, 
>> file_priv->ctx.id) != file_priv);
>> kfree(file_priv);
>>  }
>>
>> @@ -66,7 +86,7 @@ void ivpu_file_priv_put(struct ivpu_file_priv **link)
>> struct ivpu_file_priv *file_priv = *link;
>> struct ivpu_device *vdev = file_priv->vdev;
>>
>> -   WARN_ON(!file_priv);
>> +   drm_WARN_ON(&vdev->drm, !file_priv);
>>
>> ivpu_dbg(vdev, KREF, "file_priv put: ctx %u refcount %u\n",
>>  file_priv->ctx.id, kref_read(&file_priv->ref));
>> @@ -200,6 +220,9 @@ static void ivpu_postclose(struct drm_device *dev, 
>> struct drm_file *file)
>>  static const struct drm_ioctl_desc ivpu_drm_ioctls[] = {
>> DRM_IOCTL_DEF_DRV(IVPU_GET_PARAM, ivpu_get_param_ioctl, 0),
>> DRM_IOCTL_DEF_DRV(IVPU_SET_PARAM, ivpu_set_param_ioctl, 0),
>> +   DRM_IOCTL_DEF_DRV(IVPU_BO_CREATE, ivpu_bo_create_ioctl, 0),
>> +   DRM_IOCTL_DEF_DRV(IVPU_BO_INFO, ivpu_bo_info_ioctl, 0),
>> +   DRM_IOCTL_DEF_DRV(IVPU_BO_USERPTR, ivpu_bo_userptr_ioctl, 0),
>>  };
>>
>>  int ivpu_shutdown(struct ivpu_device *vdev)
>> @@ -233,6 +256,10 @@ static const struct drm_driver driver = {
>>
>> .open = ivpu_open,
>> .postclose = ivpu_postclose,
>> +   .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>> +   .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>> +   .gem_prime_import = ivpu_gem_prime_import,
>> +   .gem_prime_mmap = drm_gem_prime_mmap,
>>
>> .ioctls = ivpu_drm_ioctls,
>> .num_ioctls = ARRAY_SIZE(ivpu_drm_ioctls),
>> diff --git a/drivers/accel/ivpu/ivpu_drv.h 

Re: [PATCH v4 3/7] accel/ivpu: Add GEM buffer object management

2022-12-18 Thread Oded Gabbay
On Thu, Dec 8, 2022 at 1:08 PM Jacek Lawrynowicz wrote:
>
> Adds four types of GEM-based BOs for the VPU:
>   - shmem
>   - userptr
>   - internal
>   - prime
>
> All types are implemented as struct ivpu_bo, based on
> struct drm_gem_object. VPU address is allocated when buffer is created
> except for imported prime buffers that allocate it in BO_INFO IOCTL due
> to missing file_priv arg in gem_prime_import callback.
> Internal buffers are pinned on creation; the rest of the buffer types
> can be pinned on demand (in the SUBMIT IOCTL).
> Buffer VPU address, allocated pages and mappings are released when the
> buffer is destroyed.
> An eviction mechanism is planned for future versions.
>
> Add three new IOCTLs: BO_CREATE, BO_INFO, BO_USERPTR
>
> Signed-off-by: Jacek Lawrynowicz 
> ---
>  drivers/accel/ivpu/Makefile   |   1 +
>  drivers/accel/ivpu/ivpu_drv.c |  31 +-
>  drivers/accel/ivpu/ivpu_drv.h |   1 +
>  drivers/accel/ivpu/ivpu_gem.c | 820 ++
>  drivers/accel/ivpu/ivpu_gem.h | 128 ++
>  include/uapi/drm/ivpu_drm.h   | 127 ++
>  6 files changed, 1106 insertions(+), 2 deletions(-)
>  create mode 100644 drivers/accel/ivpu/ivpu_gem.c
>  create mode 100644 drivers/accel/ivpu/ivpu_gem.h
>
> diff --git a/drivers/accel/ivpu/Makefile b/drivers/accel/ivpu/Makefile
> index 37b8bf1d3247..1b4b24ebf5ea 100644
> --- a/drivers/accel/ivpu/Makefile
> +++ b/drivers/accel/ivpu/Makefile
> @@ -3,6 +3,7 @@
>
>  intel_vpu-y := \
> ivpu_drv.o \
> +   ivpu_gem.o \
> ivpu_hw_mtl.o \
> ivpu_mmu.o \
> ivpu_mmu_context.o
> diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
> index a22d41ca5a4b..29e78c5ec7c5 100644
> --- a/drivers/accel/ivpu/ivpu_drv.c
> +++ b/drivers/accel/ivpu/ivpu_drv.c
> @@ -12,8 +12,10 @@
>  #include 
>  #include 
>  #include 
> +#include 
>
>  #include "ivpu_drv.h"
> +#include "ivpu_gem.h"
>  #include "ivpu_hw.h"
>  #include "ivpu_mmu.h"
>  #include "ivpu_mmu_context.h"
> @@ -49,6 +51,24 @@ struct ivpu_file_priv *ivpu_file_priv_get(struct 
> ivpu_file_priv *file_priv)
> return file_priv;
>  }
>
> +struct ivpu_file_priv *ivpu_file_priv_get_by_ctx_id(struct ivpu_device 
> *vdev, unsigned long id)
> +{
> +   struct ivpu_file_priv *file_priv;
> +
> +   xa_lock_irq(&vdev->context_xa);
> +   file_priv = xa_load(&vdev->context_xa, id);
> +   /* file_priv may still be in context_xa during file_priv_release() */
> +   if (file_priv && !kref_get_unless_zero(&file_priv->ref))
> +   file_priv = NULL;
> +   xa_unlock_irq(&vdev->context_xa);
> +
> +   if (file_priv)
> +   ivpu_dbg(vdev, KREF, "file_priv get by id: ctx %u refcount 
> %u\n",
> +file_priv->ctx.id, kref_read(&file_priv->ref));
> +
> +   return file_priv;
> +}
> +
>  static void file_priv_release(struct kref *ref)
>  {
> struct ivpu_file_priv *file_priv = container_of(ref, struct 
> ivpu_file_priv, ref);
> @@ -57,7 +77,7 @@ static void file_priv_release(struct kref *ref)
> ivpu_dbg(vdev, FILE, "file_priv release: ctx %u\n", 
> file_priv->ctx.id);
>
> ivpu_mmu_user_context_fini(vdev, &file_priv->ctx);
> -   WARN_ON(xa_erase_irq(&vdev->context_xa, file_priv->ctx.id) != 
> file_priv);
> +   drm_WARN_ON(&vdev->drm, xa_erase_irq(&vdev->context_xa, 
> file_priv->ctx.id) != file_priv);
> kfree(file_priv);
>  }
>
> @@ -66,7 +86,7 @@ void ivpu_file_priv_put(struct ivpu_file_priv **link)
> struct ivpu_file_priv *file_priv = *link;
> struct ivpu_device *vdev = file_priv->vdev;
>
> -   WARN_ON(!file_priv);
> +   drm_WARN_ON(&vdev->drm, !file_priv);
>
> ivpu_dbg(vdev, KREF, "file_priv put: ctx %u refcount %u\n",
>  file_priv->ctx.id, kref_read(&file_priv->ref));
> @@ -200,6 +220,9 @@ static void ivpu_postclose(struct drm_device *dev, struct 
> drm_file *file)
>  static const struct drm_ioctl_desc ivpu_drm_ioctls[] = {
> DRM_IOCTL_DEF_DRV(IVPU_GET_PARAM, ivpu_get_param_ioctl, 0),
> DRM_IOCTL_DEF_DRV(IVPU_SET_PARAM, ivpu_set_param_ioctl, 0),
> +   DRM_IOCTL_DEF_DRV(IVPU_BO_CREATE, ivpu_bo_create_ioctl, 0),
> +   DRM_IOCTL_DEF_DRV(IVPU_BO_INFO, ivpu_bo_info_ioctl, 0),
> +   DRM_IOCTL_DEF_DRV(IVPU_BO_USERPTR, ivpu_bo_userptr_ioctl, 0),
>  };
>
>  int ivpu_shutdown(struct ivpu_device *vdev)
> @@ -233,6 +256,10 @@ static const struct drm_driver driver = {
>
> .open = ivpu_open,
> .postclose = ivpu_postclose,
> +   .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
> +   .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> +   .gem_prime_import = ivpu_gem_prime_import,
> +   .gem_prime_mmap = drm_gem_prime_mmap,
>
> .ioctls = ivpu_drm_ioctls,
> .num_ioctls = ARRAY_SIZE(ivpu_drm_ioctls),
> diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
> index 6e8b88068fc9..69088a03928a 100644
> --- a/drivers/accel/ivpu/ivpu_drv.h
> +++ b/drivers/accel/ivpu/ivpu_drv.h
> @@ -114,6 +114,7 @@ extern u8