Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-11-01 Thread Danilo Krummrich

On 11/1/23 20:45, Thomas Hellström wrote:

On Wed, 2023-11-01 at 18:21 +0100, Danilo Krummrich wrote:

On 11/1/23 17:38, Thomas Hellström wrote:

On Tue, 2023-10-31 at 18:38 +0100, Danilo Krummrich wrote:

On 10/31/23 11:32, Thomas Hellström wrote:

On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote:

Add an abstraction layer between the drm_gpuva mappings of a particular
drm_gem_object and this GEM object itself. The abstraction represents a
combination of a drm_gem_object and drm_gpuvm. The drm_gem_object holds
a list of drm_gpuvm_bo structures (the structure representing this
abstraction), while each drm_gpuvm_bo contains a list of mappings of this
GEM object.

This has multiple advantages:

1) We can use the drm_gpuvm_bo structure to attach it to various lists
   of the drm_gpuvm. This is useful for tracking external and evicted
   objects per VM, which is introduced in subsequent patches.

2) Finding mappings of a certain drm_gem_object mapped in a certain
   drm_gpuvm becomes much cheaper.

3) Drivers can derive and extend the structure to easily represent
   driver specific states of a BO for a certain GPUVM (see the sketch
   right after this list).
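As an illustration of 3), here is a minimal sketch of what such a derived
structure could look like. The driver-side names below are hypothetical and
only meant as an example; solely struct drm_gpuvm_bo itself comes from this
patch.

    /* Hypothetical driver-side specialization of struct drm_gpuvm_bo. */
    struct my_driver_vm_bo {
            struct drm_gpuvm_bo base;       /* embedded VM / BO combination */
            bool evicted;                   /* made-up driver specific state */
    };

    /* Recover the driver structure from the embedded base structure. */
    static struct my_driver_vm_bo *to_my_vm_bo(struct drm_gpuvm_bo *vm_bo)
    {
            return container_of(vm_bo, struct my_driver_vm_bo, base);
    }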

The idea of this abstraction was taken from amdgpu, hence the credit for
this idea goes to the developers of amdgpu.

Cc: Christian König 
Signed-off-by: Danilo Krummrich 
---
 drivers/gpu/drm/drm_gpuvm.c            | 335 +----
 drivers/gpu/drm/nouveau/nouveau_uvmm.c |  64 +++--
 include/drm/drm_gem.h                  |  32 +--
 include/drm/drm_gpuvm.h                | 188 +-
 4 files changed, 533 insertions(+), 86 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index c03332883432..7f4f5919f84c 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -70,6 +70,18 @@
  * &drm_gem_object, such as the &drm_gem_object containing the root page table,
  * but it can also be a 'dummy' object, which can be allocated with
  * drm_gpuvm_resv_object_alloc().
+ *
+ * In order to connect a struct drm_gpuva to its backing &drm_gem_object each
+ * &drm_gem_object maintains a list of &drm_gpuvm_bo structures, and each
+ * &drm_gpuvm_bo contains a list of &drm_gpuva structures.
+ *
+ * A &drm_gpuvm_bo is an abstraction that represents a combination of a
+ * &drm_gpuvm and a &drm_gem_object. Every such combination should be unique.
+ * This is ensured by the API through drm_gpuvm_bo_obtain() and
+ * drm_gpuvm_bo_obtain_prealloc() which first look into the corresponding
+ * &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this
+ * particular combination. If not existent a new instance is created and linked
+ * to the &drm_gem_object.
  */

    /**

@@ -395,21 +407,28 @@
 /**
  * DOC: Locking
  *
- * Generally, the GPU VA manager does not take care of locking itself, it is
- * the drivers responsibility to take care about locking. Drivers might want to
- * protect the following operations: inserting, removing and iterating
- * &drm_gpuva objects as well as generating all kinds of operations, such as
- * split / merge or prefetch.
- *
- * The GPU VA manager also does not take care of the locking of the backing
- * &drm_gem_object buffers GPU VA lists by itself; drivers are responsible to
- * enforce mutual exclusion using either the GEMs dma_resv lock or alternatively
- * a driver specific external lock. For the latter see also
- * drm_gem_gpuva_set_lock().
- *
- * However, the GPU VA manager contains lockdep checks to ensure callers of its
- * API hold the corresponding lock whenever the &drm_gem_objects GPU VA list is
- * accessed by functions such as drm_gpuva_link() or drm_gpuva_unlink().
+ * In terms of managing &drm_gpuva entries DRM GPUVM does not take care of
+ * locking itself, it is the drivers responsibility to take care about locking.
+ * Drivers might want to protect the following operations: inserting, removing
+ * and iterating &drm_gpuva objects as well as generating all kinds of
+ * operations, such as split / merge or prefetch.
+ *
+ * DRM GPUVM also does not take care of the locking of the backing
+ * &drm_gem_object buffers GPU VA lists and &drm_gpuvm_bo abstractions by
+ * itself; drivers are responsible to enforce mutual exclusion using either the
+ * GEMs dma_resv lock or alternatively a driver specific external lock. For the
+ * latter see also drm_gem_gpuva_set_lock().
+ *
+ * However, DRM GPUVM contains lockdep checks to ensure callers of its API hold
+ * the corresponding lock whenever the &drm_gem_objects GPU VA list is accessed
+ * by functions such as drm_gpuva_link() or drm_gpuva_unlink(), but also
+ * drm_gpuvm_bo_obtain() and drm_gpuvm_bo_put().
+ *
+ * The latter is required since on creation and destruction of a &drm_gpuvm_bo
+ * the &drm_gpuvm_bo is attached / removed from the &drm_gem_objects gpuva list.
+ * Subsequent calls to drm_gpuvm_bo_obtain() for the same &drm_gpuvm and
+ * &drm_gem_object must be able to observe previous creations and destructions
+ * of &drm_gpuvm_bos in order to keep instances unique.
  */
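To make the locking rules above concrete, a rough sketch of how a driver
might create or reuse a VM / BO combination while holding the GEM's dma_resv
lock follows. It is only an illustration based on the drm_gpuvm_bo_obtain() /
drm_gpuvm_bo_put() API added by this patch; the function name is made up,
ERR_PTR-style error returns are assumed, and the actual mapping setup is
omitted.

    /* Hypothetical driver helper, for illustration only. */
    static int my_driver_attach_bo(struct drm_gpuvm *gpuvm,
                                   struct drm_gem_object *obj)
    {
            struct drm_gpuvm_bo *vm_bo;
            int ret;

            /* The GEM's dma_resv lock protects the GEM's gpuva list. */
            ret = dma_resv_lock(obj->resv, NULL);
            if (ret)
                    return ret;

            /* Looks up an existing drm_gpuvm_bo for this (gpuvm, obj) pair
             * on the GEM's gpuva list, or creates and links a new one. */
            vm_bo = drm_gpuvm_bo_obtain(gpuvm, obj);
            if (IS_ERR(vm_bo)) {
                    ret = PTR_ERR(vm_bo);
                    goto out_unlock;
            }

            /* ... create drm_gpuva mappings and link them to vm_bo ... */

            /* Drop the reference from drm_gpuvm_bo_obtain(); once the last
             * reference is gone the drm_gpuvm_bo is removed from the GEM's
             * gpuva list again, which is why the lock must be held here. */
            drm_gpuvm_bo_put(vm_bo);

    out_unlock:
            dma_resv_unlock(obj->resv);
            return ret;
    }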
   

Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-11-01 Thread Danilo Krummrich

On 11/1/23 10:56, Thomas Hellström wrote:

On Wed, 2023-11-01 at 10:41 +0100, Thomas Hellström wrote:

Hi, Danilo,

On Tue, 2023-10-31 at 18:52 +0100, Danilo Krummrich wrote:

On 10/31/23 17:45, Thomas Hellström wrote:

On Tue, 2023-10-31 at 17:39 +0100, Danilo Krummrich wrote:

On 10/31/23 12:25, Thomas Hellström wrote:


That checkpatch.pl error still remains as well.


I guess you refer to:

ERROR: do not use assignment in if condition
#633: FILE: drivers/gpu/drm/nouveau/nouveau_uvmm.c:1165:
+   if (!(op->gem.obj = obj))

This was an intentional decision, since in this specific case it seems to
be more readable than the alternatives.

However, if we consider this to be a hard rule, which we never ever break,
I'm fine changing it too.
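For reference, a checkpatch.pl conformant alternative would roughly be the
following sketch; the surrounding nouveau_uvmm.c code and its actual error
path are not shown in this thread, so the -ENOENT return below is just a
stand-in.

    /* flagged form: assignment inside the if condition */
    if (!(op->gem.obj = obj))
            return -ENOENT; /* stand-in for the real error path */

    /* conformant alternative: separate the assignment from the test */
    op->gem.obj = obj;
    if (!obj)
            return -ENOENT; /* stand-in for the real error path */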


With the errors, sooner or later they are going to start generating
patches to "fix" them. In this particular case also Xe CI is
complaining and aborting the build when I submit the Xe adaptation, so
it'd be good to be checkpatch.pl conformant IMHO.


Ok, I will change this one.

However, in general my opinion on coding style is that we should preserve
the privilege to deviate from it when we agree it makes sense and improves
the code quality.

Having a CI forcing people to *blindly* follow certain rules and even
abort the build isn't very beneficial in that respect.

Also, consider patches which partially change a line of code that already
contains a coding style "issue" - the CI would also block you on that one,
I guess. Besides that, it seems to block you on unrelated code; note that
the assignment in question is from Nouveau and not from GPUVM.


Yes, I completely agree that having CI enforce error free coding style
checks is bad, and I'll see if I can get that changed on Xe CI. To my
knowledge it hasn't always been like that.

But OTOH my take on this is that if there are coding style rules and
recommendations we should try to follow them unless there are *strong*
reasons not to. Sometimes that may result in code that may be a little
harder to read, but OTOH a reviewer won't have to read up on the
component's style flavor before reviewing and it will avoid future
style fix patches.


Basically meaning I'll continue to point those out when reviewing in
case the author made an oversight, but won't require fixing for an R-B
if the component owner thinks otherwise.


Yeah, I fully agree on that. That's why I changed it. I still think it was
better as it was, but clearly way too minor to break the rules.

- Danilo




Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-11-01 Thread Thomas Hellström
On Wed, 2023-11-01 at 10:41 +0100, Thomas Hellström wrote:
> Hi, Danilo,
> 
> On Tue, 2023-10-31 at 18:52 +0100, Danilo Krummrich wrote:
> > On 10/31/23 17:45, Thomas Hellström wrote:
> > > On Tue, 2023-10-31 at 17:39 +0100, Danilo Krummrich wrote:
> > > > On 10/31/23 12:25, Thomas Hellström wrote:
> > > > > On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote:
> > > > > > Add an abstraction layer between the drm_gpuva mappings of
> > > > > > a
> > > > > > particular
> > > > > > drm_gem_object and this GEM object itself. The abstraction
> > > > > > represents
> > > > > > a
> > > > > > combination of a drm_gem_object and drm_gpuvm. The
> > > > > > drm_gem_object
> > > > > > holds
> > > > > > a list of drm_gpuvm_bo structures (the structure
> > > > > > representing
> > > > > > this
> > > > > > abstraction), while each drm_gpuvm_bo contains list of
> > > > > > mappings
> > > > > > of
> > > > > > this
> > > > > > GEM object.
> > > > > > 
> > > > > > This has multiple advantages:
> > > > > > 
> > > > > > 1) We can use the drm_gpuvm_bo structure to attach it to
> > > > > > various
> > > > > > lists
> > > > > >  of the drm_gpuvm. This is useful for tracking external
> > > > > > and
> > > > > > evicted
> > > > > >  objects per VM, which is introduced in subsequent
> > > > > > patches.
> > > > > > 
> > > > > > 2) Finding mappings of a certain drm_gem_object mapped in a
> > > > > > certain
> > > > > >  drm_gpuvm becomes much cheaper.
> > > > > > 
> > > > > > 3) Drivers can derive and extend the structure to easily
> > > > > > represent
> > > > > >  driver specific states of a BO for a certain GPUVM.
> > > > > > 
> > > > > > The idea of this abstraction was taken from amdgpu, hence
> > > > > > the
> > > > > > credit
> > > > > > for
> > > > > > this idea goes to the developers of amdgpu.
> > > > > > 
> > > > > > Cc: Christian König 
> > > > > > Signed-off-by: Danilo Krummrich 
> > > > > > ---
> > > > > >    drivers/gpu/drm/drm_gpuvm.c    | 335
> > > > > > +--
> > > > > > --
> > > > > >    drivers/gpu/drm/nouveau/nouveau_uvmm.c |  64 +++--
> > > > > >    include/drm/drm_gem.h  |  32 +--
> > > > > >    include/drm/drm_gpuvm.h    | 188
> > > > > > +-
> > > > > >    4 files changed, 533 insertions(+), 86 deletions(-)
> > > > > 
> > > > > That checkpatch.pl error still remains as well.
> > > > 
> > > > I guess you refer to:
> > > > 
> > > > ERROR: do not use assignment in if condition
> > > > #633: FILE: drivers/gpu/drm/nouveau/nouveau_uvmm.c:1165:
> > > > +   if (!(op->gem.obj = obj))
> > > > 
> > > > This was an intentional decision, since in this specific case
> > > > it
> > > > seems to
> > > > be more readable than the alternatives.
> > > > 
> > > > However, if we consider this to be a hard rule, which we never
> > > > ever
> > > > break,
> > > > I'm fine changing it too.
> > > 
> > > With the errors, sooner or later they are going to start generate
> > > patches to "fix" them. In this particular case also Xe CI is
> > > complaining and abort building when I submit the Xe adaptation,
> > > so
> > > it'd
> > > be good to be checkpatch.pl conformant IMHO.
> > 
> > Ok, I will change this one.
> > 
> > However, in general my opinion on coding style is that we should
> > preserve us
> > the privilege to deviate from it when we agree it makes sense and
> > improves
> > the code quality.
> > 
> > Having a CI forcing people to *blindly* follow certain rules and
> > even
> > abort
> > building isn't very beneficial in that respect.
> > 
> > Also, consider patches which partially change a line of code that
> > already
> > contains a coding style "issue" - the CI would also block you on
> > that
> > one I
> > guess. Besides that it seems to block you on unrelated code, note
> > that the
> > assignment in question is from Nouveau and not from GPUVM.
> 
> Yes, I completely agree that having CI enforce error free coding
> style
> checks is bad, and I'll see if I can get that changed on Xe CI. To my
> Knowledge It hasn't always been like that.
> 
> But OTOH my take on this is that if there are coding style rules and
> recommendations we should try to follow them unless there are
> *strong*
> reasons not to. Sometimes that may result in code that may be a
> little
> harder to read, but OTOH a reviewer won't have to read up on the
> component's style flavor before reviewing and it will avoid future
> style fix patches.

Basically meaning I'll continue to point those out when reviewing in
case the author made an oversight, but won't require fixing for an R-B
if the component owner thinks otherwise.

Thanks,
Thomas

> 
> Thanks,
> Thomas
> 
> 
> > 
> > - Danilo
> > 
> > > 
> > > Thanks,
> > > Thomas
> > > 
> > > 
> > > 
> > > 
> > > > 
> > > > > 
> > > > > Thanks,
> > > > > Thomas
> > > > > 
> > > > 
> > > 
> > 
> 



Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-11-01 Thread Thomas Hellström
Hi, Danilo,

On Tue, 2023-10-31 at 18:52 +0100, Danilo Krummrich wrote:
> On 10/31/23 17:45, Thomas Hellström wrote:
> > On Tue, 2023-10-31 at 17:39 +0100, Danilo Krummrich wrote:
> > > On 10/31/23 12:25, Thomas Hellström wrote:
> > > > On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote:
> > > > > Add an abstraction layer between the drm_gpuva mappings of a
> > > > > particular
> > > > > drm_gem_object and this GEM object itself. The abstraction
> > > > > represents
> > > > > a
> > > > > combination of a drm_gem_object and drm_gpuvm. The
> > > > > drm_gem_object
> > > > > holds
> > > > > a list of drm_gpuvm_bo structures (the structure representing
> > > > > this
> > > > > abstraction), while each drm_gpuvm_bo contains list of
> > > > > mappings
> > > > > of
> > > > > this
> > > > > GEM object.
> > > > > 
> > > > > This has multiple advantages:
> > > > > 
> > > > > 1) We can use the drm_gpuvm_bo structure to attach it to
> > > > > various
> > > > > lists
> > > > >  of the drm_gpuvm. This is useful for tracking external
> > > > > and
> > > > > evicted
> > > > >  objects per VM, which is introduced in subsequent
> > > > > patches.
> > > > > 
> > > > > 2) Finding mappings of a certain drm_gem_object mapped in a
> > > > > certain
> > > > >  drm_gpuvm becomes much cheaper.
> > > > > 
> > > > > 3) Drivers can derive and extend the structure to easily
> > > > > represent
> > > > >  driver specific states of a BO for a certain GPUVM.
> > > > > 
> > > > > The idea of this abstraction was taken from amdgpu, hence the
> > > > > credit
> > > > > for
> > > > > this idea goes to the developers of amdgpu.
> > > > > 
> > > > > Cc: Christian König 
> > > > > Signed-off-by: Danilo Krummrich 
> > > > > ---
> > > > >    drivers/gpu/drm/drm_gpuvm.c    | 335
> > > > > +--
> > > > > --
> > > > >    drivers/gpu/drm/nouveau/nouveau_uvmm.c |  64 +++--
> > > > >    include/drm/drm_gem.h  |  32 +--
> > > > >    include/drm/drm_gpuvm.h    | 188
> > > > > +-
> > > > >    4 files changed, 533 insertions(+), 86 deletions(-)
> > > > 
> > > > That checkpatch.pl error still remains as well.
> > > 
> > > I guess you refer to:
> > > 
> > > ERROR: do not use assignment in if condition
> > > #633: FILE: drivers/gpu/drm/nouveau/nouveau_uvmm.c:1165:
> > > +   if (!(op->gem.obj = obj))
> > > 
> > > This was an intentional decision, since in this specific case it
> > > seems to
> > > be more readable than the alternatives.
> > > 
> > > However, if we consider this to be a hard rule, which we never
> > > ever
> > > break,
> > > I'm fine changing it too.
> > 
> > With the errors, sooner or later they are going to start generate
> > patches to "fix" them. In this particular case also Xe CI is
> > complaining and abort building when I submit the Xe adaptation, so
> > it'd
> > be good to be checkpatch.pl conformant IMHO.
> 
> Ok, I will change this one.
> 
> However, in general my opinion on coding style is that we should
> preserve us
> the privilege to deviate from it when we agree it makes sense and
> improves
> the code quality.
> 
> Having a CI forcing people to *blindly* follow certain rules and even
> abort
> building isn't very beneficial in that respect.
> 
> Also, consider patches which partially change a line of code that
> already
> contains a coding style "issue" - the CI would also block you on that
> one I
> guess. Besides that it seems to block you on unrelated code, note
> that the
> assignment in question is from Nouveau and not from GPUVM.

Yes, I completely agree that having CI enforce error free coding style
checks is bad, and I'll see if I can get that changed on Xe CI. To my
knowledge it hasn't always been like that.

But OTOH my take on this is that if there are coding style rules and
recommendations we should try to follow them unless there are *strong*
reasons not to. Sometimes that may result in code that may be a little
harder to read, but OTOH a reviewer won't have to read up on the
component's style flavor before reviewing and it will avoid future
style fix patches.

Thanks,
Thomas


> 
> - Danilo
> 
> > 
> > Thanks,
> > Thomas
> > 
> > 
> > 
> > 
> > > 
> > > > 
> > > > Thanks,
> > > > Thomas
> > > > 
> > > 
> > 
> 



Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-10-31 Thread Danilo Krummrich

On 10/31/23 17:45, Thomas Hellström wrote:

On Tue, 2023-10-31 at 17:39 +0100, Danilo Krummrich wrote:

On 10/31/23 12:25, Thomas Hellström wrote:


That checkpatch.pl error still remains as well.


I guess you refer to:

ERROR: do not use assignment in if condition
#633: FILE: drivers/gpu/drm/nouveau/nouveau_uvmm.c:1165:
+   if (!(op->gem.obj = obj))

This was an intentional decision, since in this specific case it seems to
be more readable than the alternatives.

However, if we consider this to be a hard rule, which we never ever break,
I'm fine changing it too.


With the errors, sooner or later they are going to start generating
patches to "fix" them. In this particular case also Xe CI is
complaining and aborting the build when I submit the Xe adaptation, so
it'd be good to be checkpatch.pl conformant IMHO.


Ok, I will change this one.

However, in general my opinion on coding style is that we should preserve
the privilege to deviate from it when we agree it makes sense and improves
the code quality.

Having a CI forcing people to *blindly* follow certain rules and even abort
building isn't very beneficial in that respect.

Also, consider patches which partially change a line of code that already
contains a coding style "issue" - the CI would also block you on that one I
guess. Besides that it seems to block you on unrelated code, note that the
assignment in question is from Nouveau and not from GPUVM.

- Danilo




Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-10-31 Thread Danilo Krummrich

On 10/31/23 17:50, Thomas Hellström wrote:

On Tue, 2023-10-31 at 17:30 +0100, Danilo Krummrich wrote:

On 10/31/23 12:45, Jani Nikula wrote:

On Tue, 31 Oct 2023, Thomas Hellström wrote:

On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote:

+ * Returns: a pointer to the &drm_gpuvm_bo on success, NULL on


Still needs s/Returns:/Return:/g


FWIW, both work to accommodate the variance across the kernel, although
I think only the latter is documented and recommended. It's also the
most popular:

    10577 Return
     3596 Returns


I'd like to keep "Returns", since that's what GPUVM uses already
everywhere else.


Ok. It looks like the Returns: are converted to Return in the rendered
output so I guess that's why it's the form that is documented.

I pointed this out since in the last review you replied you were going
to change it, and also when the code starts seeing updates from others,
it might become inconsistent if those patches follow the documented
way.
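For illustration, the documented form would look like this in a kernel-doc
comment; the function below is made up and not part of the patch.

    /**
     * my_find_vm_bo() - look up a VM / BO combination (hypothetical example)
     * @gpuvm: the &drm_gpuvm to search in
     * @obj: the &drm_gem_object to look up
     *
     * Return: a pointer to the &drm_gpuvm_bo on success, NULL otherwise.
     */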


Sorry for that. I think I wrote this answer when I was at XDC and hence was
a little bit distracted.



But I'm OK either way.


Ok, then let's just keep it as it is.



/Thomas





     1104 RETURN
  568 return
  367 returns
  352 RETURNS
    1 RETURNs

BR,
Jani.










Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-10-31 Thread Thomas Hellström
On Tue, 2023-10-31 at 17:30 +0100, Danilo Krummrich wrote:
> On 10/31/23 12:45, Jani Nikula wrote:
> > On Tue, 31 Oct 2023, Thomas Hellström
> >  wrote:
> > > On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote:
> > > > + * Returns: a pointer to the _gpuvm_bo on success, NULL on
> > > 
> > > Still needs s/Returns:/Return:/g
> > 
> > FWIW, both work to accommodate the variance across the kernel,
> > although
> > I think only the latter is documented and recommended. It's also
> > the
> > most popular:
> > 
> >    10577 Return
> >     3596 Returns
> 
> I'd like to keep "Returns", since that's what GPUVM uses already
> everywhere else.

Ok. It looks like the Returns: are converted to Return in the rendered
output so I guess that's why it's the form that is documented.

I pointed this out since in the last review you replied you were going
to change it, and also when the code starts seeing updates from others,
it might become inconsistent if those patches follow the documented
way.

But I'm OK either way.

/Thomas


> 
> >     1104 RETURN
> >  568 return
> >  367 returns
> >  352 RETURNS
> >    1 RETURNs
> > 
> > BR,
> > Jani.
> > 
> > 
> 



Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-10-31 Thread Thomas Hellström
On Tue, 2023-10-31 at 17:39 +0100, Danilo Krummrich wrote:
> On 10/31/23 12:25, Thomas Hellström wrote:
> > On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote:
> > > Add an abstraction layer between the drm_gpuva mappings of a
> > > particular
> > > drm_gem_object and this GEM object itself. The abstraction
> > > represents
> > > a
> > > combination of a drm_gem_object and drm_gpuvm. The drm_gem_object
> > > holds
> > > a list of drm_gpuvm_bo structures (the structure representing
> > > this
> > > abstraction), while each drm_gpuvm_bo contains list of mappings
> > > of
> > > this
> > > GEM object.
> > > 
> > > This has multiple advantages:
> > > 
> > > 1) We can use the drm_gpuvm_bo structure to attach it to various
> > > lists
> > >     of the drm_gpuvm. This is useful for tracking external and
> > > evicted
> > >     objects per VM, which is introduced in subsequent patches.
> > > 
> > > 2) Finding mappings of a certain drm_gem_object mapped in a
> > > certain
> > >     drm_gpuvm becomes much cheaper.
> > > 
> > > 3) Drivers can derive and extend the structure to easily
> > > represent
> > >     driver specific states of a BO for a certain GPUVM.
> > > 
> > > The idea of this abstraction was taken from amdgpu, hence the
> > > credit
> > > for
> > > this idea goes to the developers of amdgpu.
> > > 
> > > Cc: Christian König 
> > > Signed-off-by: Danilo Krummrich 
> > > ---
> > >   drivers/gpu/drm/drm_gpuvm.c    | 335
> > > +--
> > > --
> > >   drivers/gpu/drm/nouveau/nouveau_uvmm.c |  64 +++--
> > >   include/drm/drm_gem.h  |  32 +--
> > >   include/drm/drm_gpuvm.h    | 188 +-
> > >   4 files changed, 533 insertions(+), 86 deletions(-)
> > 
> > That checkpatch.pl error still remains as well.
> 
> I guess you refer to:
> 
> ERROR: do not use assignment in if condition
> #633: FILE: drivers/gpu/drm/nouveau/nouveau_uvmm.c:1165:
> +   if (!(op->gem.obj = obj))
> 
> This was an intentional decision, since in this specific case it
> seems to
> be more readable than the alternatives.
> 
> However, if we consider this to be a hard rule, which we never ever
> break,
> I'm fine changing it too.

With the errors, sooner or later they are going to start generating
patches to "fix" them. In this particular case also Xe CI is
complaining and aborting the build when I submit the Xe adaptation, so
it'd be good to be checkpatch.pl conformant IMHO.

Thanks,
Thomas




> 
> > 
> > Thanks,
> > Thomas
> > 
> 



Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-10-31 Thread Danilo Krummrich

On 10/31/23 12:25, Thomas Hellström wrote:


That checkpatch.pl error still remains as well.


I guess you refer to:

ERROR: do not use assignment in if condition
#633: FILE: drivers/gpu/drm/nouveau/nouveau_uvmm.c:1165:
+   if (!(op->gem.obj = obj))

This was an intentional decision, since in this specific case it seems to
be more readable than the alternatives.

However, if we consider this to be a hard rule, which we never ever break,
I'm fine changing it too.



Thanks,
Thomas





Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-10-31 Thread Danilo Krummrich

On 10/31/23 12:45, Jani Nikula wrote:

On Tue, 31 Oct 2023, Thomas Hellström  wrote:

On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote:

+ * Returns: a pointer to the &drm_gpuvm_bo on success, NULL on


Still needs s/Returns:/Return:/g


FWIW, both work to accommodate the variance across the kernel, although
I think only the latter is documented and recommended. It's also the
most popular:

   10577 Return
3596 Returns


I'd like to keep "Returns", since that's what GPUVM uses already everywhere 
else.


1104 RETURN
 568 return
 367 returns
 352 RETURNS
   1 RETURNs

BR,
Jani.






Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-10-31 Thread Jani Nikula
On Tue, 31 Oct 2023, Thomas Hellström  wrote:
> On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote:
>> + * Returns: a pointer to the _gpuvm_bo on success, NULL on
>
> Still needs s/Returns:/Return:/g

FWIW, both work to accommodate the variance across the kernel, although
I think only the latter is documented and recommended. It's also the
most popular:

  10577 Return
   3596 Returns
   1104 RETURN
568 return
367 returns
352 RETURNS
  1 RETURNs

BR,
Jani.


-- 
Jani Nikula, Intel


Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-10-31 Thread Thomas Hellström
On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote:
> Add an abstraction layer between the drm_gpuva mappings of a
> particular
> drm_gem_object and this GEM object itself. The abstraction represents
> a
> combination of a drm_gem_object and drm_gpuvm. The drm_gem_object
> holds
> a list of drm_gpuvm_bo structures (the structure representing this
> abstraction), while each drm_gpuvm_bo contains list of mappings of
> this
> GEM object.
> 
> This has multiple advantages:
> 
> 1) We can use the drm_gpuvm_bo structure to attach it to various
> lists
>    of the drm_gpuvm. This is useful for tracking external and evicted
>    objects per VM, which is introduced in subsequent patches.
> 
> 2) Finding mappings of a certain drm_gem_object mapped in a certain
>    drm_gpuvm becomes much cheaper.
> 
> 3) Drivers can derive and extend the structure to easily represent
>    driver specific states of a BO for a certain GPUVM.
> 
> The idea of this abstraction was taken from amdgpu, hence the credit
> for
> this idea goes to the developers of amdgpu.
> 
> Cc: Christian König 
> Signed-off-by: Danilo Krummrich 
> ---
>  drivers/gpu/drm/drm_gpuvm.c    | 335 +--
> --
>  drivers/gpu/drm/nouveau/nouveau_uvmm.c |  64 +++--
>  include/drm/drm_gem.h  |  32 +--
>  include/drm/drm_gpuvm.h    | 188 +-
>  4 files changed, 533 insertions(+), 86 deletions(-)

That checkpatch.pl error still remains as well.

Thanks,
Thomas



Re: [PATCH drm-misc-next v7 4/7] drm/gpuvm: add an abstraction for a VM / BO combination

2023-10-31 Thread Thomas Hellström
On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote:
> Add an abstraction layer between the drm_gpuva mappings of a
> particular
> drm_gem_object and this GEM object itself. The abstraction represents
> a
> combination of a drm_gem_object and drm_gpuvm. The drm_gem_object
> holds
> a list of drm_gpuvm_bo structures (the structure representing this
> abstraction), while each drm_gpuvm_bo contains list of mappings of
> this
> GEM object.
> 
> This has multiple advantages:
> 
> 1) We can use the drm_gpuvm_bo structure to attach it to various
> lists
>    of the drm_gpuvm. This is useful for tracking external and evicted
>    objects per VM, which is introduced in subsequent patches.
> 
> 2) Finding mappings of a certain drm_gem_object mapped in a certain
>    drm_gpuvm becomes much cheaper.
> 
> 3) Drivers can derive and extend the structure to easily represent
>    driver specific states of a BO for a certain GPUVM.
> 
> The idea of this abstraction was taken from amdgpu, hence the credit
> for
> this idea goes to the developers of amdgpu.
> 
> Cc: Christian König 
> Signed-off-by: Danilo Krummrich 
> ---
>  drivers/gpu/drm/drm_gpuvm.c    | 335 +--
> --
>  drivers/gpu/drm/nouveau/nouveau_uvmm.c |  64 +++--
>  include/drm/drm_gem.h  |  32 +--
>  include/drm/drm_gpuvm.h    | 188 +-
>  4 files changed, 533 insertions(+), 86 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gpuvm.c
> b/drivers/gpu/drm/drm_gpuvm.c
> index c03332883432..7f4f5919f84c 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -70,6 +70,18 @@
>   * _gem_object, such as the _gem_object containing the root
> page table,
>   * but it can also be a 'dummy' object, which can be allocated with
>   * drm_gpuvm_resv_object_alloc().
> + *
> + * In order to connect a struct drm_gpuva its backing
> _gem_object each
> + * _gem_object maintains a list of _gpuvm_bo structures, and
> each
> + * _gpuvm_bo contains a list of _gpuva structures.
> + *
> + * A _gpuvm_bo is an abstraction that represents a combination
> of a
> + * _gpuvm and a _gem_object. Every such combination should
> be unique.
> + * This is ensured by the API through drm_gpuvm_bo_obtain() and
> + * drm_gpuvm_bo_obtain_prealloc() which first look into the
> corresponding
> + * _gem_object list of _gpuvm_bos for an existing instance
> of this
> + * particular combination. If not existent a new instance is created
> and linked
> + * to the _gem_object.
>   */
>  
>  /**
> @@ -395,21 +407,28 @@
>  /**
>   * DOC: Locking
>   *
> - * Generally, the GPU VA manager does not take care of locking
> itself, it is
> - * the drivers responsibility to take care about locking. Drivers
> might want to
> - * protect the following operations: inserting, removing and
> iterating
> - * _gpuva objects as well as generating all kinds of operations,
> such as
> - * split / merge or prefetch.
> - *
> - * The GPU VA manager also does not take care of the locking of the
> backing
> - * _gem_object buffers GPU VA lists by itself; drivers are
> responsible to
> - * enforce mutual exclusion using either the GEMs dma_resv lock or
> alternatively
> - * a driver specific external lock. For the latter see also
> - * drm_gem_gpuva_set_lock().
> - *
> - * However, the GPU VA manager contains lockdep checks to ensure
> callers of its
> - * API hold the corresponding lock whenever the _gem_objects GPU
> VA list is
> - * accessed by functions such as drm_gpuva_link() or
> drm_gpuva_unlink().
> + * In terms of managing _gpuva entries DRM GPUVM does not take
> care of
> + * locking itself, it is the drivers responsibility to take care
> about locking.
> + * Drivers might want to protect the following operations:
> inserting, removing
> + * and iterating _gpuva objects as well as generating all kinds
> of
> + * operations, such as split / merge or prefetch.
> + *
> + * DRM GPUVM also does not take care of the locking of the backing
> + * _gem_object buffers GPU VA lists and _gpuvm_bo
> abstractions by
> + * itself; drivers are responsible to enforce mutual exclusion using
> either the
> + * GEMs dma_resv lock or alternatively a driver specific external
> lock. For the
> + * latter see also drm_gem_gpuva_set_lock().
> + *
> + * However, DRM GPUVM contains lockdep checks to ensure callers of
> its API hold
> + * the corresponding lock whenever the _gem_objects GPU VA list
> is accessed
> + * by functions such as drm_gpuva_link() or drm_gpuva_unlink(), but
> also
> + * drm_gpuvm_bo_obtain() and drm_gpuvm_bo_put().
> + *
> + * The latter is required since on creation and destruction of a
> _gpuvm_bo
> + * the _gpuvm_bo is attached / removed from the _gem_objects
> gpuva list.
> + * Subsequent calls to drm_gpuvm_bo_obtain() for the same _gpuvm
> and
> + * _gem_object must be able to observe previous creations and
> destructions
> + * of _gpuvm_bos in order to keep instances unique.
>   */
>  
>  /**
> @@ -439,6 +458,7 @@
>   *