Re: [Nouveau] [PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize dma_resv/extobj handling and GEM validation

2023-08-30, Danilo Krummrich
On Wed, Aug 30, 2023 at 03:42:08PM +0200, Thomas Hellström (Intel) wrote:
> 
> On 8/30/23 14:49, Danilo Krummrich wrote:
> > Hi Thomas,
> > 
> > thanks for having a look!
> > 
> > On Wed, Aug 30, 2023 at 09:27:45AM +0200, Thomas Hellström (Intel) wrote:
> > > Hi, Danilo.
> > > 
> > > Some quick comments since I'm doing some Xe work in this area. Will 
> > > probably
> > > get back with more.
> > > 
> > > On 8/20/23 23:53, Danilo Krummrich wrote:
> > > > So far the DRM GPUVA manager offers common infrastructure to track GPU 
> > > > VA
> > > > allocations and mappings, generically connect GPU VA mappings to their
> > > > backing buffers and perform more complex mapping operations on the GPU 
> > > > VA
> > > > space.
> > > > 
> > > > However, there are more design patterns commonly used by drivers, which
> > > > can potentially be generalized in order to make the DRM GPUVA manager
> > > > represent a basic GPU-VM implementation. In this context, this patch 
> > > > aims
> > > > at generalizing the following elements.
> > > > 
> > > > 1) Provide a common dma-resv for GEM objects not being used outside of
> > > >  this GPU-VM.
> > > > 
> > > > 2) Provide tracking of external GEM objects (GEM objects which are
> > > >  shared with other GPU-VMs).
> > > > 
> > > > 3) Provide functions to efficiently lock all GEM objects dma-resv the
> > > >  GPU-VM contains mappings of.
> > > > 
> > > > 4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
> > > >  of, such that validation of evicted GEM objects is accelerated.
> > > > 
> > > > 5) Provide some convenience functions for common patterns.
> > > > 
> > > > Rather than being designed as a "framework", the target is to make all
> > > > features appear as a collection of optional helper functions, such that
> > > > drivers are free to make use of the DRM GPUVA manager's basic
> > > > functionality and opt-in for other features without setting any feature
> > > > flags, just by making use of the corresponding functions.
> > > > 
> > > > Signed-off-by: Danilo Krummrich 
> > > > ---
> > > >drivers/gpu/drm/drm_gpuva_mgr.c | 688 
> > > > +++-
> > > >include/drm/drm_gem.h   |  48 ++-
> > > >include/drm/drm_gpuva_mgr.h | 302 +-
> > > >3 files changed, 1010 insertions(+), 28 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c 
> > > > b/drivers/gpu/drm/drm_gpuva_mgr.c
> > > > index f86bfad74ff8..69872b205961 100644
> > > > --- a/drivers/gpu/drm/drm_gpuva_mgr.c
> > > > +++ b/drivers/gpu/drm/drm_gpuva_mgr.c
> > > > @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
> > > >/**
> > > > * drm_gpuva_manager_init() - initialize a &drm_gpuva_manager
> > > > * @mgr: pointer to the &drm_gpuva_manager to initialize
> > > > + * @drm: the driver's &drm_device
> > > > * @name: the name of the GPU VA space
> > > > * @start_offset: the start offset of the GPU VA space
> > > > * @range: the size of the GPU VA space
> > > > @@ -669,6 +670,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
> > > > */
> > > >void
> > > >drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> > > > +  struct drm_device *drm,
> > > >const char *name,
> > > >u64 start_offset, u64 range,
> > > >u64 reserve_offset, u64 reserve_range,
> > > > @@ -677,6 +679,11 @@ drm_gpuva_manager_init(struct drm_gpuva_manager 
> > > > *mgr,
> > > > mgr->rb.tree = RB_ROOT_CACHED;
> > > > INIT_LIST_HEAD(&mgr->rb.list);
> > > > +   mt_init(&mgr->mt_ext);
> > > > +
> > > > +   INIT_LIST_HEAD(&mgr->evict.list);
> > > > +   spin_lock_init(&mgr->evict.lock);
> > > > +
> > > > drm_gpuva_check_overflow(start_offset, range);
> > > > mgr->mm_start = start_offset;
> > > > mgr->mm_range = range;
> > > > @@ -694,6 +701,9 @@ drm_gpuva_manager_init(struct drm_gpuva_manager 
> > > > *mgr,
> > > >  reserve_range)))
> > > > __drm_gpuva_insert(mgr, &mgr->kernel_alloc_node);
> > > > }
> > > > +
> > > > +   drm_gem_private_object_init(drm, &mgr->d_obj, 0);
> > > > +   mgr->resv = mgr->d_obj.resv;
> > > >}
> > > >EXPORT_SYMBOL_GPL(drm_gpuva_manager_init);
> > > > @@ -713,10 +723,575 @@ drm_gpuva_manager_destroy(struct 
> > > > drm_gpuva_manager *mgr)
> > > > __drm_gpuva_remove(&mgr->kernel_alloc_node);
> > > > WARN(!RB_EMPTY_ROOT(&mgr->rb.tree.rb_root),
> > > > -"GPUVA tree is not empty, potentially leaking memory.");
> > > > +"GPUVA tree is not empty, potentially leaking memory.\n");
> > > > +
> > > > +   mtree_destroy(&mgr->mt_ext);
> > > > +   WARN(!list_empty(&mgr->evict.list), "Evict list should be empty.\n");
> > > > +
> > > > +   drm_gem_private_object_fini(&mgr->d_obj);
> > > >}
> > > >
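
To illustrate the opt-in usage the commit message describes, here is a
minimal driver-side sketch of the extended init API; the uvmm container
and driver_gpuva_ops are hypothetical names, only the function signature
comes from the patch:

	struct drm_gpuva_manager *mgr = &uvmm->umgr;

	/* The new drm parameter lets the manager create an internal GEM
	 * object; its dma-resv (mgr->resv) becomes the VM's common
	 * reservation for VM-local BOs. */
	drm_gpuva_manager_init(mgr, drm, "example-vm",
			       0, 1ULL << 47,
			       kernel_start, kernel_range,
			       &driver_gpuva_ops);

	/* ... use the VA space ... */

	drm_gpuva_manager_destroy(mgr);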

Re: [Nouveau] [PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize dma_resv/extobj handling and GEM validation

2023-08-30, Thomas Hellström (Intel)



On 8/30/23 14:49, Danilo Krummrich wrote:

Hi Thomas,

thanks for having a look!

On Wed, Aug 30, 2023 at 09:27:45AM +0200, Thomas Hellström (Intel) wrote:

Hi, Danilo.

Some quick comments since I'm doing some Xe work in this area. Will probably
get back with more.

On 8/20/23 23:53, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.

However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM implementation. In this context, this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used outside of
 this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects which are
 shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-resv the
 GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
 of, such that validation of evicted GEM objects is accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to make all
features appear as a collection of optional helper functions, such that
drivers are free to make use of the DRM GPUVA manager's basic
functionality and opt-in for other features without setting any feature
flags, just by making use of the corresponding functions.

Signed-off-by: Danilo Krummrich 
---
   drivers/gpu/drm/drm_gpuva_mgr.c | 688 +++-
   include/drm/drm_gem.h   |  48 ++-
   include/drm/drm_gpuva_mgr.h | 302 +-
   3 files changed, 1010 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c
index f86bfad74ff8..69872b205961 100644
--- a/drivers/gpu/drm/drm_gpuva_mgr.c
+++ b/drivers/gpu/drm/drm_gpuva_mgr.c
@@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
   /**
* drm_gpuva_manager_init() - initialize a &drm_gpuva_manager
* @mgr: pointer to the &drm_gpuva_manager to initialize
+ * @drm: the driver's &drm_device
* @name: the name of the GPU VA space
* @start_offset: the start offset of the GPU VA space
* @range: the size of the GPU VA space
@@ -669,6 +670,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
*/
   void
   drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
+  struct drm_device *drm,
   const char *name,
   u64 start_offset, u64 range,
   u64 reserve_offset, u64 reserve_range,
@@ -677,6 +679,11 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
mgr->rb.tree = RB_ROOT_CACHED;
INIT_LIST_HEAD(&mgr->rb.list);
+   mt_init(&mgr->mt_ext);
+
+   INIT_LIST_HEAD(&mgr->evict.list);
+   spin_lock_init(&mgr->evict.lock);
+
drm_gpuva_check_overflow(start_offset, range);
mgr->mm_start = start_offset;
mgr->mm_range = range;
@@ -694,6 +701,9 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
 reserve_range)))
__drm_gpuva_insert(mgr, &mgr->kernel_alloc_node);
}
+
+   drm_gem_private_object_init(drm, &mgr->d_obj, 0);
+   mgr->resv = mgr->d_obj.resv;
   }
   EXPORT_SYMBOL_GPL(drm_gpuva_manager_init);
@@ -713,10 +723,575 @@ drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr)
__drm_gpuva_remove(&mgr->kernel_alloc_node);
WARN(!RB_EMPTY_ROOT(&mgr->rb.tree.rb_root),
-"GPUVA tree is not empty, potentially leaking memory.");
+"GPUVA tree is not empty, potentially leaking memory.\n");
+
+   mtree_destroy(&mgr->mt_ext);
+   WARN(!list_empty(&mgr->evict.list), "Evict list should be empty.\n");
+
+   drm_gem_private_object_fini(&mgr->d_obj);
   }
   EXPORT_SYMBOL_GPL(drm_gpuva_manager_destroy);
+/**
+ * drm_gpuva_manager_prepare_objects() - prepare all associated BOs
+ * @mgr: the &drm_gpuva_manager
+ * @num_fences: the amount of &dma_fences to reserve
+ *
+ * Calls drm_exec_prepare_obj() for all &drm_gem_objects the given
+ * &drm_gpuva_manager contains mappings of.
+ *
+ * Drivers can obtain the corresponding &drm_exec instance through
+ * DRM_GPUVA_EXEC(). It is the driver's responsibility to call drm_exec_init()
+ * and drm_exec_fini() accordingly.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_manager_prepare_objects(struct drm_gpuva_manager *mgr,
+ unsigned int num_fences)
+{
+   struct drm_exec *exec = DRM_GPUVA_EXEC(mgr);
+   MA_STATE(mas, &mgr->mt_ext, 0, 0);
+   union {
+   void *ptr;
+   uintptr_t cnt;
+   } ref;
+   int ret;
+
+   ret = drm_exec_prepare_obj(exec, &mgr->d_obj, num_fences);
+  
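
A rough sketch of the calling convention the kernel-doc above describes,
assuming the driver drives the drm_exec loop itself; num_fences and the
submit step are placeholders:

	struct drm_exec *exec = DRM_GPUVA_EXEC(mgr);
	int ret;

	drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
	drm_exec_until_all_locked(exec) {
		/* Reserves mgr->d_obj plus all tracked external objects. */
		ret = drm_gpuva_manager_prepare_objects(mgr, num_fences);
		drm_exec_retry_on_contention(exec);
		if (ret)
			break;
	}
	if (!ret) {
		/* All relevant dma-resvs are held: validate BOs, add
		 * fences, submit. */
	}
	drm_exec_fini(exec);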

Re: [Nouveau] [PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize dma_resv/extobj handling and GEM validation

2023-08-30, Danilo Krummrich
On Wed, Aug 30, 2023 at 09:48:02AM +0200, Christian König wrote:
> 
> 
> Am 20.08.23 um 23:53 schrieb Danilo Krummrich:
> > So far the DRM GPUVA manager offers common infrastructure to track GPU VA
> > allocations and mappings, generically connect GPU VA mappings to their
> > backing buffers and perform more complex mapping operations on the GPU VA
> > space.
> > 
> > However, there are more design patterns commonly used by drivers, which
> > can potentially be generalized in order to make the DRM GPUVA manager
> > represent a basic GPU-VM implementation. In this context, this patch aims
> > at generalizing the following elements.
> > 
> > 1) Provide a common dma-resv for GEM objects not being used outside of
> > this GPU-VM.
> > 
> > 2) Provide tracking of external GEM objects (GEM objects which are
> > shared with other GPU-VMs).
> > 
> > 3) Provide functions to efficiently lock all GEM objects dma-resv the
> > GPU-VM contains mappings of.
> > 
> > 4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
> > of, such that validation of evicted GEM objects is accelerated.
> > 
> > 5) Provide some convenience functions for common patterns.
> 
> Interesting work.
> 
> You basically implement a bunch of the ideas I came up with to improve the
> amdgpu performance, now in the common manager. That was one of the remaining
> blockers I had for using this in amdgpu.
> 
> Question is, for example, how do you track evictions? E.g. we don't have a
> common concept of eviction in GEM as far as I know. Or is the driver
> responsible for giving those notifications to the GPUVA manager?

Right, it is the driver's responsibility to add a drm_gpuva_gem (or VM_BO)
to the manager's evict list.

The idea was that drivers have control over the state of a drm_gpuva_gem, such
that a driver can move it to driver-specific lists as well, like all the ones
you have in amdgpu.
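
For illustration, such a driver-side eviction notification could look
roughly like this; the drm_gpuva_gem layout (mgr backpointer, evict list
entry) is an assumption for the sketch, only mgr->evict.{list,lock} come
from the patch:

	static void driver_gem_evicted(struct drm_gpuva_gem *vm_bo)
	{
		struct drm_gpuva_manager *mgr = vm_bo->mgr;

		/* The driver decides what counts as evicted and hands the
		 * VM_BO to the manager for later revalidation. */
		spin_lock(&mgr->evict.lock);
		list_add_tail(&vm_bo->evict_entry, &mgr->evict.list);
		spin_unlock(&mgr->evict.lock);
	}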

> 
> And would it be possible to lock only a specific area of the VM, e.g. every
> BO mapped in the interval X..Y?

Currently, the drm_gpuva_manager_lock() functions always lock the GPU-VM's
dma-resv lock, plus all the dma-resv locks of the external objects the manager
keeps track of.

But surely, we could also add something like drm_gpuva_manager_lock_range()
where we just iterate all drm_gpuvas between X and Y and lock the dma-resv
locks of each drm_gpuva's backing BO.
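
As a sketch, such a helper could be built on the existing VA-range
iterator; duplicate BOs would be left to drm_exec (e.g. initialized with
DRM_EXEC_IGNORE_DUPLICATES), and the caller would still own the
drm_exec_until_all_locked() loop:

	int drm_gpuva_manager_lock_range(struct drm_gpuva_manager *mgr,
					 struct drm_exec *exec,
					 u64 addr, u64 range,
					 unsigned int num_fences)
	{
		struct drm_gpuva *va;
		int ret;

		/* Lock the dma-resv of every BO mapped in [addr, addr + range). */
		drm_gpuva_for_each_va_range(va, mgr, addr, addr + range) {
			ret = drm_exec_prepare_obj(exec, va->gem.obj, num_fences);
			if (ret)
				return ret;
		}

		return 0;
	}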

> 
> Regards,
> Christian.
> 
> > 
> > Rather than being designed as a "framework", the target is to make all
> > features appear as a collection of optional helper functions, such that
> > drivers are free to make use of the DRM GPUVA manager's basic
> > functionality and opt-in for other features without setting any feature
> > flags, just by making use of the corresponding functions.
> > 
> > Signed-off-by: Danilo Krummrich 
> > ---
> >   drivers/gpu/drm/drm_gpuva_mgr.c | 688 +++-
> >   include/drm/drm_gem.h   |  48 ++-
> >   include/drm/drm_gpuva_mgr.h | 302 +-
> >   3 files changed, 1010 insertions(+), 28 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c 
> > b/drivers/gpu/drm/drm_gpuva_mgr.c
> > index f86bfad74ff8..69872b205961 100644
> > --- a/drivers/gpu/drm/drm_gpuva_mgr.c
> > +++ b/drivers/gpu/drm/drm_gpuva_mgr.c
> > @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
> >   /**
> >* drm_gpuva_manager_init() - initialize a &drm_gpuva_manager
> >* @mgr: pointer to the &drm_gpuva_manager to initialize
> > + * @drm: the driver's &drm_device
> >* @name: the name of the GPU VA space
> >* @start_offset: the start offset of the GPU VA space
> >* @range: the size of the GPU VA space
> > @@ -669,6 +670,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
> >*/
> >   void
> >   drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> > +  struct drm_device *drm,
> >const char *name,
> >u64 start_offset, u64 range,
> >u64 reserve_offset, u64 reserve_range,
> > @@ -677,6 +679,11 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> > mgr->rb.tree = RB_ROOT_CACHED;
> > INIT_LIST_HEAD(&mgr->rb.list);
> > +   mt_init(&mgr->mt_ext);
> > +
> > +   INIT_LIST_HEAD(&mgr->evict.list);
> > +   spin_lock_init(&mgr->evict.lock);
> > +
> > drm_gpuva_check_overflow(start_offset, range);
> > mgr->mm_start = start_offset;
> > mgr->mm_range = range;
> > @@ -694,6 +701,9 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> >  reserve_range)))
> > __drm_gpuva_insert(mgr, &mgr->kernel_alloc_node);
> > }
> > +
> > +   drm_gem_private_object_init(drm, &mgr->d_obj, 0);
> > +   mgr->resv = mgr->d_obj.resv;
> >   }
> >   EXPORT_SYMBOL_GPL(drm_gpuva_manager_init);
> > @@ -713,10 +723,575 @@ drm_gpuva_manager_destroy(struct drm_gpuva_manager 
> > *mgr)
> > __drm_gpuva_remove(&mgr->kernel_alloc_node);
> > 

Re: [Nouveau] [PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize dma_resv/extobj handling and GEM validation

2023-08-30, Danilo Krummrich
Hi Thomas,

thanks for having a look!

On Wed, Aug 30, 2023 at 09:27:45AM +0200, Thomas Hellström (Intel) wrote:
> Hi, Danilo.
> 
> Some quick comments since I'm doing some Xe work in this area. Will probably
> get back with more.
> 
> On 8/20/23 23:53, Danilo Krummrich wrote:
> > So far the DRM GPUVA manager offers common infrastructure to track GPU VA
> > allocations and mappings, generically connect GPU VA mappings to their
> > backing buffers and perform more complex mapping operations on the GPU VA
> > space.
> > 
> > However, there are more design patterns commonly used by drivers, which
> > can potentially be generalized in order to make the DRM GPUVA manager
> > represent a basic GPU-VM implementation. In this context, this patch aims
> > at generalizing the following elements.
> > 
> > 1) Provide a common dma-resv for GEM objects not being used outside of
> > this GPU-VM.
> > 
> > 2) Provide tracking of external GEM objects (GEM objects which are
> > shared with other GPU-VMs).
> > 
> > 3) Provide functions to efficiently lock all GEM objects dma-resv the
> > GPU-VM contains mappings of.
> > 
> > 4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
> > of, such that validation of evicted GEM objects is accelerated.
> > 
> > 5) Provide some convenience functions for common patterns.
> > 
> > Rather than being designed as a "framework", the target is to make all
> > features appear as a collection of optional helper functions, such that
> > drivers are free to make use of the DRM GPUVA manager's basic
> > functionality and opt-in for other features without setting any feature
> > flags, just by making use of the corresponding functions.
> > 
> > Signed-off-by: Danilo Krummrich 
> > ---
> >   drivers/gpu/drm/drm_gpuva_mgr.c | 688 +++-
> >   include/drm/drm_gem.h   |  48 ++-
> >   include/drm/drm_gpuva_mgr.h | 302 +-
> >   3 files changed, 1010 insertions(+), 28 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c 
> > b/drivers/gpu/drm/drm_gpuva_mgr.c
> > index f86bfad74ff8..69872b205961 100644
> > --- a/drivers/gpu/drm/drm_gpuva_mgr.c
> > +++ b/drivers/gpu/drm/drm_gpuva_mgr.c
> > @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
> >   /**
> >* drm_gpuva_manager_init() - initialize a &drm_gpuva_manager
> >* @mgr: pointer to the &drm_gpuva_manager to initialize
> > + * @drm: the driver's &drm_device
> >* @name: the name of the GPU VA space
> >* @start_offset: the start offset of the GPU VA space
> >* @range: the size of the GPU VA space
> > @@ -669,6 +670,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
> >*/
> >   void
> >   drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> > +  struct drm_device *drm,
> >const char *name,
> >u64 start_offset, u64 range,
> >u64 reserve_offset, u64 reserve_range,
> > @@ -677,6 +679,11 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> > mgr->rb.tree = RB_ROOT_CACHED;
> > INIT_LIST_HEAD(&mgr->rb.list);
> > +   mt_init(&mgr->mt_ext);
> > +
> > +   INIT_LIST_HEAD(&mgr->evict.list);
> > +   spin_lock_init(&mgr->evict.lock);
> > +
> > drm_gpuva_check_overflow(start_offset, range);
> > mgr->mm_start = start_offset;
> > mgr->mm_range = range;
> > @@ -694,6 +701,9 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> >  reserve_range)))
> > __drm_gpuva_insert(mgr, &mgr->kernel_alloc_node);
> > }
> > +
> > +   drm_gem_private_object_init(drm, &mgr->d_obj, 0);
> > +   mgr->resv = mgr->d_obj.resv;
> >   }
> >   EXPORT_SYMBOL_GPL(drm_gpuva_manager_init);
> > @@ -713,10 +723,575 @@ drm_gpuva_manager_destroy(struct drm_gpuva_manager 
> > *mgr)
> > __drm_gpuva_remove(&mgr->kernel_alloc_node);
> > WARN(!RB_EMPTY_ROOT(&mgr->rb.tree.rb_root),
> > -"GPUVA tree is not empty, potentially leaking memory.");
> > +"GPUVA tree is not empty, potentially leaking memory.\n");
> > +
> > +   mtree_destroy(&mgr->mt_ext);
> > +   WARN(!list_empty(&mgr->evict.list), "Evict list should be empty.\n");
> > +
> > +   drm_gem_private_object_fini(&mgr->d_obj);
> >   }
> >   EXPORT_SYMBOL_GPL(drm_gpuva_manager_destroy);
> > +/**
> > + * drm_gpuva_manager_prepare_objects() - prepare all associated BOs
> > + * @mgr: the &drm_gpuva_manager
> > + * @num_fences: the amount of &dma_fences to reserve
> > + *
> > + * Calls drm_exec_prepare_obj() for all &drm_gem_objects the given
> > + * &drm_gpuva_manager contains mappings of.
> > + *
> > + * Drivers can obtain the corresponding &drm_exec instance through
> > + * DRM_GPUVA_EXEC(). It is the driver's responsibility to call
> > + * drm_exec_init() and drm_exec_fini() accordingly.
> > + *
> > + * Returns: 0 on success, negative error code on failure.
> > + */
> > +int
> > +drm_gpuva_manager_prepare_objects(struct drm_gpuva_manager *mgr,
> > + unsigned 

Re: [Nouveau] [PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize dma_resv/extobj handling and GEM validation

2023-08-30, Christian König




Am 20.08.23 um 23:53 schrieb Danilo Krummrich:

So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.

However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM implementation. In this context, this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used outside of
this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects which are
shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-resv the
GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
of, such that validation of evicted GEM objects is accelerated.

5) Provide some convenience functions for common patterns.


Interesting work.

You basically implement a bunch of the ideas I came up with to improve the
amdgpu performance, now in the common manager. That was one of the
remaining blockers I had for using this in amdgpu.


Question is, for example, how do you track evictions? E.g. we don't have a
common concept of eviction in GEM as far as I know. Or is the driver 
responsible for giving those notifications to the GPUVA manager?


And would it be possible to lock only a specific area of the VM, e.g. 
every BO mapped in the interval X..Y?


Regards,
Christian.



Rather than being designed as a "framework", the target is to make all
features appear as a collection of optional helper functions, such that
drivers are free to make use of the DRM GPUVA manager's basic
functionality and opt-in for other features without setting any feature
flags, just by making use of the corresponding functions.

Signed-off-by: Danilo Krummrich 
---
  drivers/gpu/drm/drm_gpuva_mgr.c | 688 +++-
  include/drm/drm_gem.h   |  48 ++-
  include/drm/drm_gpuva_mgr.h | 302 +-
  3 files changed, 1010 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c
index f86bfad74ff8..69872b205961 100644
--- a/drivers/gpu/drm/drm_gpuva_mgr.c
+++ b/drivers/gpu/drm/drm_gpuva_mgr.c
@@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
  /**
   * drm_gpuva_manager_init() - initialize a &drm_gpuva_manager
   * @mgr: pointer to the &drm_gpuva_manager to initialize
+ * @drm: the driver's &drm_device
   * @name: the name of the GPU VA space
   * @start_offset: the start offset of the GPU VA space
   * @range: the size of the GPU VA space
@@ -669,6 +670,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
   */
  void
  drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
+  struct drm_device *drm,
   const char *name,
   u64 start_offset, u64 range,
   u64 reserve_offset, u64 reserve_range,
@@ -677,6 +679,11 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
mgr->rb.tree = RB_ROOT_CACHED;
	INIT_LIST_HEAD(&mgr->rb.list);

+	mt_init(&mgr->mt_ext);
+
+	INIT_LIST_HEAD(&mgr->evict.list);
+	spin_lock_init(&mgr->evict.lock);
+
drm_gpuva_check_overflow(start_offset, range);
mgr->mm_start = start_offset;
mgr->mm_range = range;
@@ -694,6 +701,9 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
 reserve_range)))
__drm_gpuva_insert(mgr, &mgr->kernel_alloc_node);
}
+
+   drm_gem_private_object_init(drm, &mgr->d_obj, 0);
+   mgr->resv = mgr->d_obj.resv;
  }
  EXPORT_SYMBOL_GPL(drm_gpuva_manager_init);
  
@@ -713,10 +723,575 @@ drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr)

	__drm_gpuva_remove(&mgr->kernel_alloc_node);

	WARN(!RB_EMPTY_ROOT(&mgr->rb.tree.rb_root),
-"GPUVA tree is not empty, potentially leaking memory.");
+"GPUVA tree is not empty, potentially leaking memory.\n");
+
+   mtree_destroy(&mgr->mt_ext);
+   WARN(!list_empty(&mgr->evict.list), "Evict list should be empty.\n");
+
+   drm_gem_private_object_fini(&mgr->d_obj);
  }
  EXPORT_SYMBOL_GPL(drm_gpuva_manager_destroy);
  
+/**

+ * drm_gpuva_manager_prepare_objects() - prepare all associated BOs
+ * @mgr: the &drm_gpuva_manager
+ * @num_fences: the amount of &dma_fences to reserve
+ *
+ * Calls drm_exec_prepare_obj() for all &drm_gem_objects the given
+ * &drm_gpuva_manager contains mappings of.
+ *
+ * Drivers can obtain the corresponding &drm_exec instance through
+ * DRM_GPUVA_EXEC(). It is the driver's responsibility to call drm_exec_init()
+ * and drm_exec_fini() accordingly.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_manager_prepare_objects(struct drm_gpuva_manager *mgr,
+ unsigned int 

Re: [Nouveau] [PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize dma_resv/extobj handling and GEM validation

2023-08-30, Thomas Hellström (Intel)

Hi, Danilo.

Some quick comments since I'm doing some Xe work in this area. Will 
probably get back with more.


On 8/20/23 23:53, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.

However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM implementation. In this context, this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used outside of
this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects which are
shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-resv the
GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
of, such that validation of evicted GEM objects is accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to make all
features appear as a collection of optional helper functions, such that
drivers are free to make use of the DRM GPUVA manager's basic
functionality and opt-in for other features without setting any feature
flags, just by making use of the corresponding functions.

Signed-off-by: Danilo Krummrich 
---
  drivers/gpu/drm/drm_gpuva_mgr.c | 688 +++-
  include/drm/drm_gem.h   |  48 ++-
  include/drm/drm_gpuva_mgr.h | 302 +-
  3 files changed, 1010 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c
index f86bfad74ff8..69872b205961 100644
--- a/drivers/gpu/drm/drm_gpuva_mgr.c
+++ b/drivers/gpu/drm/drm_gpuva_mgr.c
@@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
  /**
   * drm_gpuva_manager_init() - initialize a &drm_gpuva_manager
   * @mgr: pointer to the &drm_gpuva_manager to initialize
+ * @drm: the driver's &drm_device
   * @name: the name of the GPU VA space
   * @start_offset: the start offset of the GPU VA space
   * @range: the size of the GPU VA space
@@ -669,6 +670,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
   */
  void
  drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
+  struct drm_device *drm,
   const char *name,
   u64 start_offset, u64 range,
   u64 reserve_offset, u64 reserve_range,
@@ -677,6 +679,11 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
mgr->rb.tree = RB_ROOT_CACHED;
	INIT_LIST_HEAD(&mgr->rb.list);

+	mt_init(&mgr->mt_ext);
+
+	INIT_LIST_HEAD(&mgr->evict.list);
+	spin_lock_init(&mgr->evict.lock);
+
drm_gpuva_check_overflow(start_offset, range);
mgr->mm_start = start_offset;
mgr->mm_range = range;
@@ -694,6 +701,9 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
 reserve_range)))
__drm_gpuva_insert(mgr, &mgr->kernel_alloc_node);
}
+
+   drm_gem_private_object_init(drm, &mgr->d_obj, 0);
+   mgr->resv = mgr->d_obj.resv;
  }
  EXPORT_SYMBOL_GPL(drm_gpuva_manager_init);
  
@@ -713,10 +723,575 @@ drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr)

	__drm_gpuva_remove(&mgr->kernel_alloc_node);

	WARN(!RB_EMPTY_ROOT(&mgr->rb.tree.rb_root),
-"GPUVA tree is not empty, potentially leaking memory.");
+"GPUVA tree is not empty, potentially leaking memory.\n");
+
+   mtree_destroy(&mgr->mt_ext);
+   WARN(!list_empty(&mgr->evict.list), "Evict list should be empty.\n");
+
+   drm_gem_private_object_fini(&mgr->d_obj);
  }
  EXPORT_SYMBOL_GPL(drm_gpuva_manager_destroy);
  
+/**

+ * drm_gpuva_manager_prepare_objects() - prepare all associated BOs
+ * @mgr: the &drm_gpuva_manager
+ * @num_fences: the amount of &dma_fences to reserve
+ *
+ * Calls drm_exec_prepare_obj() for all &drm_gem_objects the given
+ * &drm_gpuva_manager contains mappings of.
+ *
+ * Drivers can obtain the corresponding &drm_exec instance through
+ * DRM_GPUVA_EXEC(). It is the driver's responsibility to call drm_exec_init()
+ * and drm_exec_fini() accordingly.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_manager_prepare_objects(struct drm_gpuva_manager *mgr,
+ unsigned int num_fences)
+{
+   struct drm_exec *exec = DRM_GPUVA_EXEC(mgr);
+   MA_STATE(mas, &mgr->mt_ext, 0, 0);
+   union {
+   void *ptr;
+   uintptr_t cnt;
+   } ref;
+   int ret;
+
+   ret = drm_exec_prepare_obj(exec, &mgr->d_obj, num_fences);
+   if (ret)
+   goto out;
+
+   rcu_read_lock();
In Xe we're protecting the external object list with an outer lock
(the same one protecting the mgr itself).
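
For comparison, a sketch of that outer-lock scheme, assuming a plain list
of external objects instead of the maple tree (ext_objs and ext_entry are
made-up names): the list is only ever touched with the GPU-VM's own
dma-resv held, so readers need no rcu_read_lock():

	/* Additions, removals and iteration all run under the VM's
	 * dma-resv, which here doubles as the outer lock. */
	dma_resv_lock(mgr->resv, NULL);
	list_add_tail(&vm_bo->ext_entry, &mgr->ext_objs);
	dma_resv_unlock(mgr->resv);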