Re: [PATCH v3 6/7] drm/i915: Use vma resources for async unbinding

2021-12-21 Thread Matthew Auld

On 21/12/2021 16:07, Thomas Hellström wrote:

On Tue, 2021-12-21 at 14:02, Matthew Auld wrote:

On 17/12/2021 14:52, Thomas Hellström wrote:

Implement async (non-blocking) unbinding by not syncing the vma before
calling unbind on the vma_resource.
Add the resulting unbind fence to the object's dma_resv from where it is
picked up by the ttm migration code.
Ideally these unbind fences should be coalesced with the migration blit
fence to avoid stalling the migration blit waiting for unbind, as they
can certainly go on in parallel, but since we don't yet have a
reasonable data structure to use to coalesce fences and attach the
resulting fence to a timeline, we defer that for now.

Note that with async unbinding, even while the unbind waits for the
preceding bind to complete before unbinding, the vma itself might have been
destroyed in the process, clearing the vma pages. Therefore we can
only allow async unbinding if we have a refcounted sg-list and keep a
refcount on that for the vma resource pages to stay intact until
binding occurs. If this condition is not met, a request for an async
unbind is diverted to a sync unbind.
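
To make the "add the resulting unbind fence to the object's dma_resv" step
a little more concrete, here is a rough sketch of attaching such a fence
with the dma_resv API of recent kernels (the helpers available at the time
of this thread differed slightly); the function name is purely illustrative
and this is not the actual i915/TTM code:

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/*
 * Illustrative only: publish an unbind fence in an object's reservation
 * object so that a later migration blit, which syncs against the
 * dma_resv, naturally waits for the unbind to complete.
 */
static int attach_unbind_fence_sketch(struct dma_resv *resv,
				      struct dma_fence *unbind_fence)
{
	int err;

	err = dma_resv_lock(resv, NULL);
	if (err)
		return err;

	/* Make sure there is a free fence slot before adding. */
	err = dma_resv_reserve_fences(resv, 1);
	if (!err)
		dma_resv_add_fence(resv, unbind_fence, DMA_RESV_USAGE_KERNEL);

	dma_resv_unlock(resv);
	return err;
}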

v2:
- Use a separate kmem_cache for vma resources for now to isolate their
  memory allocation and aid debugging.
- Move the check for vm closed to the actual unbinding thread. Regardless
  of whether the vm is closed, we need the unbind fence to properly wait
  for capture.
- Clear vma_res::vm on unbind and update its documentation.

Signed-off-by: Thomas Hellström 
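
Regarding the v2 note above about a dedicated kmem_cache for vma resources:
a minimal sketch of what such a slab-cache-backed allocator could look like,
where the struct and helper names (vma_resource_sketch and friends) are
illustrative and not the actual i915 implementation:

#include <linux/slab.h>

/* Illustrative stand-in for the real vma resource structure. */
struct vma_resource_sketch {
	u64 start;
	u64 size;
	/* ... unbind fence, sg-list reference, etc. ... */
};

static struct kmem_cache *slab_vma_resources_sketch;

static int vma_resource_slab_init_sketch(void)
{
	/* A dedicated cache isolates these allocations in slabinfo/kmemleak. */
	slab_vma_resources_sketch = KMEM_CACHE(vma_resource_sketch, 0);
	return slab_vma_resources_sketch ? 0 : -ENOMEM;
}

static struct vma_resource_sketch *vma_resource_alloc_sketch(void)
{
	return kmem_cache_zalloc(slab_vma_resources_sketch, GFP_KERNEL);
}

static void vma_resource_free_sketch(struct vma_resource_sketch *res)
{
	if (res)
		kmem_cache_free(slab_vma_resources_sketch, res);
}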





@@ -416,6 +420,7 @@ int i915_vma_bind(struct i915_vma *vma,
 {
 	u32 bind_flags;
 	u32 vma_flags;
+	int ret;
 
 	lockdep_assert_held(&vma->vm->mutex);
 	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
@@ -424,12 +429,12 @@ int i915_vma_bind(struct i915_vma *vma,
 	if (GEM_DEBUG_WARN_ON(range_overflows(vma->node.start,
 					      vma->node.size,
 					      vma->vm->total))) {
-		kfree(vma_res);
+		i915_vma_resource_free(vma_res);
 		return -ENODEV;
 	}
 
 	if (GEM_DEBUG_WARN_ON(!flags)) {
-		kfree(vma_res);
+		i915_vma_resource_free(vma_res);
 		return -EINVAL;
 	}
 
@@ -441,12 +446,30 @@ int i915_vma_bind(struct i915_vma *vma,
 
 	bind_flags &= ~vma_flags;
 	if (bind_flags == 0) {
-		kfree(vma_res);
+		i915_vma_resource_free(vma_res);
 		return 0;
 	}
 
 	GEM_BUG_ON(!vma->pages);
 
+	/* Wait for or await async unbinds touching our range */
+	if (work && bind_flags & vma->vm->bind_async_flags)
+		ret = i915_vma_resource_bind_dep_await(vma->vm,
+						       &work->base.chain,
+						       vma->node.start,
+						       vma->node.size,
+						       true,
+						       GFP_NOWAIT |
+						       __GFP_RETRY_MAYFAIL |
+						       __GFP_NOWARN);
+	else
+		ret = i915_vma_resource_bind_dep_sync(vma->vm, vma->node.start,
						      vma->node.size, true);
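
As a side note on the two calls added above: conceptually, the synchronous
dependency pass waits for every pending async unbind that overlaps the range
being bound. Below is a rough, self-contained sketch of that idea using the
kernel's generic interval tree; struct and function names are illustrative
and this is not the actual i915_vma_resource_bind_dep_sync() implementation.

#include <linux/dma-fence.h>
#include <linux/interval_tree.h>
#include <linux/kernel.h>

/* One pending async unbind, tracked over the range it covers. */
struct pending_unbind_sketch {
	struct interval_tree_node node;	/* [start, last] of the unbind */
	struct dma_fence *fence;	/* signals once the unbind completed */
};

/*
 * Wait for every pending unbind overlapping [start, start + size).
 * Illustrative only; locking of the tree is ignored here.
 */
static int bind_dep_sync_sketch(struct rb_root_cached *pending,
				unsigned long start, unsigned long size,
				bool intr)
{
	unsigned long last = start + size - 1;
	struct interval_tree_node *it;
	int err;

	for (it = interval_tree_iter_first(pending, start, last);
	     it; it = interval_tree_iter_next(it, start, last)) {
		struct pending_unbind_sketch *p =
			container_of(it, struct pending_unbind_sketch, node);

		err = dma_fence_wait(p->fence, intr);
		if (err)
			return err;
	}

	return 0;
}

The await variant in the hunk does conceptually the same walk, but instead of
blocking it makes the bind worker (via &work->base.chain) wait on each
overlapping unbind fence, which is presumably why it takes a GFP mask for the
dependency bookkeeping.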


Is there nothing scary here with coloring? Say with cache coloring, to
ensure we unbind the neighbouring nodes (if they are conflicting) before
doing the bind, or is async unbinding only ever going to be used for the
ppGTT?

And then I guess there might also be memory coloring where we likely
need to ensure that all the unbinds within the overlapping PT(s) have
been completed before doing the bind, since the bind will also increment
the usage count of the PT, potentially preventing it from being
destroyed, which will skip nuking the PDE state, AFAIK. Previously the
drm_mm node(s) would still be present, which would trigger the eviction.
Although it might be that we just end up aligning everything to 2M, and
so drop the memory coloring anyway, so maybe no need to worry about this
yet...


Hmm. This indeed sounds like there were some important considerations
left out. I was under the impression that only previously scheduled
unbinds touching the same range would need to have finished.

Currently there's only ppGTT async unbinding, but I figure moving
forward we don't want to restrict it.

I wonder whether, instead of keeping an interval tree of pending unbinds,
we should keep just a single fence per VM for the last pending unbind,
and move to the RB tree as a separate optimization step if needed. That
would, AFAICT, keep the current semantics: all unbinds that were
scheduled before the current bind are completed before the bind. Do you
think that would be sufficient?
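
For illustration, a minimal sketch of that single-fence-per-VM idea
(hypothetical names, locking ignored): each new unbind fence is folded into
a dma_fence_chain stored on the vm, so waiting on the stored head waits for
every unbind scheduled so far. This is not actual i915 code.

#include <linux/dma-fence.h>
#include <linux/dma-fence-chain.h>

struct vm_unbind_tracker_sketch {
	struct dma_fence *last_unbind;	/* head of the unbind chain, or NULL */
	u64 seqno;
};

static int track_async_unbind_sketch(struct vm_unbind_tracker_sketch *t,
				     struct dma_fence *unbind_fence)
{
	struct dma_fence_chain *chain = dma_fence_chain_alloc();

	if (!chain)
		return -ENOMEM;

	/*
	 * The chain node signals once both the previous head and the new
	 * unbind fence have signaled, so the stored fence always covers
	 * every unbind scheduled so far. dma_fence_chain_init() consumes
	 * the references passed in.
	 */
	dma_fence_chain_init(chain, t->last_unbind,
			     dma_fence_get(unbind_fence), ++t->seqno);
	t->last_unbind = &chain->base;
	return 0;
}

static int wait_for_pending_unbinds_sketch(struct vm_unbind_tracker_sketch *t,
					   bool intr)
{
	return t->last_unbind ? dma_fence_wait(t->last_unbind, intr) : 0;
}

The obvious trade-off versus an interval tree is that a bind may then also
wait for unbinds in completely unrelated ranges, which is why the RB/interval
tree is mentioned as a possible later optimization.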


Single fence should work I think. Or alternatively keep the interval 
tree and then add a 4K chunk at the beginning and end of t

Re: [PATCH v3 6/7] drm/i915: Use vma resources for async unbinding

2021-12-21 Thread Thomas Hellström
On Tue, 2021-12-21 at 14:02, Matthew Auld wrote:
> On 17/12/2021 14:52, Thomas Hellström wrote:
> > Implement async (non-blocking) unbinding by not syncing the vma before
> > calling unbind on the vma_resource.
> > Add the resulting unbind fence to the object's dma_resv from where it is
> > picked up by the ttm migration code.
> > Ideally these unbind fences should be coalesced with the migration blit
> > fence to avoid stalling the migration blit waiting for unbind, as they
> > can certainly go on in parallel, but since we don't yet have a
> > reasonable data structure to use to coalesce fences and attach the
> > resulting fence to a timeline, we defer that for now.
> > 
> > Note that with async unbinding, even while the unbind waits for the
> > preceding bind to complete before unbinding, the vma itself might have been
> > destroyed in the process, clearing the vma pages. Therefore we can
> > only allow async unbinding if we have a refcounted sg-list and keep a
> > refcount on that for the vma resource pages to stay intact until
> > binding occurs. If this condition is not met, a request for an async
> > unbind is diverted to a sync unbind.
> > 
> > v2:
> > - Use a separate kmem_cache for vma resources for now to isolate their
> >   memory allocation and aid debugging.
> > - Move the check for vm closed to the actual unbinding thread. Regardless
> >   of whether the vm is closed, we need the unbind fence to properly wait
> >   for capture.
> > - Clear vma_res::vm on unbind and update its documentation.
> > 
> > Signed-off-by: Thomas Hellström 
> 
> 
> 
> > @@ -416,6 +420,7 @@ int i915_vma_bind(struct i915_vma *vma,
> >  {
> >  	u32 bind_flags;
> >  	u32 vma_flags;
> > +	int ret;
> >  
> >  	lockdep_assert_held(&vma->vm->mutex);
> >  	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
> > @@ -424,12 +429,12 @@ int i915_vma_bind(struct i915_vma *vma,
> >  	if (GEM_DEBUG_WARN_ON(range_overflows(vma->node.start,
> >  					      vma->node.size,
> >  					      vma->vm->total))) {
> > -		kfree(vma_res);
> > +		i915_vma_resource_free(vma_res);
> >  		return -ENODEV;
> >  	}
> >  
> >  	if (GEM_DEBUG_WARN_ON(!flags)) {
> > -		kfree(vma_res);
> > +		i915_vma_resource_free(vma_res);
> >  		return -EINVAL;
> >  	}
> >  
> > @@ -441,12 +446,30 @@ int i915_vma_bind(struct i915_vma *vma,
> >  
> >  	bind_flags &= ~vma_flags;
> >  	if (bind_flags == 0) {
> > -		kfree(vma_res);
> > +		i915_vma_resource_free(vma_res);
> >  		return 0;
> >  	}
> >  
> >  	GEM_BUG_ON(!vma->pages);
> >  
> > +	/* Wait for or await async unbinds touching our range */
> > +	if (work && bind_flags & vma->vm->bind_async_flags)
> > +		ret = i915_vma_resource_bind_dep_await(vma->vm,
> > +						       &work->base.chain,
> > +						       vma->node.start,
> > +						       vma->node.size,
> > +						       true,
> > +						       GFP_NOWAIT |
> > +						       __GFP_RETRY_MAYFAIL |
> > +						       __GFP_NOWARN);
> > +	else
> > +		ret = i915_vma_resource_bind_dep_sync(vma->vm, vma->node.start,
> > +						      vma->node.size, true);
> 
> Is there nothing scary here with coloring? Say with cache coloring, to
> ensure we unbind the neighbouring nodes (if they are conflicting) before
> doing the bind, or is async unbinding only ever going to be used for the
> ppGTT?
> 
> And then I guess there might also be memory coloring where we likely
> need to ensure that all the unbinds within the overlapping PT(s) have
> been completed before doing the bind, since the bind will also increment
> the usage count of the PT, potentially preventing it from being
> destroyed, which will skip nuking the PDE state, AFAIK. Previously the
> drm_mm node(s) would still be present, which would trigger the eviction.
> Although it might be that we just end up aligning everything to 2M, and
> so drop the memory coloring anyway, so maybe no need to worry about this
> yet...

Hmm. This indeed sounds like there were some important considerations
left out. I was under the impression that only previously scheduled
unbinds touching the same range would need to have finished.

Currently there's only ppGTT async unbinding, but I figure moving
forward we don't want to restrict it.

I wonder whether instead of keeping an interval tree of pending unb

Re: [PATCH v3 6/7] drm/i915: Use vma resources for async unbinding

2021-12-21 Thread Matthew Auld

On 17/12/2021 14:52, Thomas Hellström wrote:

Implement async (non-blocking) unbinding by not syncing the vma before
calling unbind on the vma_resource.
Add the resulting unbind fence to the object's dma_resv from where it is
picked up by the ttm migration code.
Ideally these unbind fences should be coalesced with the migration blit
fence to avoid stalling the migration blit waiting for unbind, as they
can certainly go on in parallel, but since we don't yet have a
reasonable data structure to use to coalesce fences and attach the
resulting fence to a timeline, we defer that for now.

Note that with async unbinding, even while the unbind waits for the
preceding bind to complete before unbinding, the vma itself might have been
destroyed in the process, clearing the vma pages. Therefore we can
only allow async unbinding if we have a refcounted sg-list and keep a
refcount on that for the vma resource pages to stay intact until
binding occurs. If this condition is not met, a request for an async
unbind is diverted to a sync unbind.

v2:
- Use a separate kmem_cache for vma resources for now to isolate their
   memory allocation and aid debugging.
- Move the check for vm closed to the actual unbinding thread. Regardless
   of whether the vm is closed, we need the unbind fence to properly wait
   for capture.
- Clear vma_res::vm on unbind and update its documentation.

Signed-off-by: Thomas Hellström 





@@ -416,6 +420,7 @@ int i915_vma_bind(struct i915_vma *vma,
 {
 	u32 bind_flags;
 	u32 vma_flags;
+	int ret;
 
 	lockdep_assert_held(&vma->vm->mutex);
 	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
@@ -424,12 +429,12 @@ int i915_vma_bind(struct i915_vma *vma,
 	if (GEM_DEBUG_WARN_ON(range_overflows(vma->node.start,
 					      vma->node.size,
 					      vma->vm->total))) {
-		kfree(vma_res);
+		i915_vma_resource_free(vma_res);
 		return -ENODEV;
 	}
 
 	if (GEM_DEBUG_WARN_ON(!flags)) {
-		kfree(vma_res);
+		i915_vma_resource_free(vma_res);
 		return -EINVAL;
 	}
 
@@ -441,12 +446,30 @@ int i915_vma_bind(struct i915_vma *vma,
 
 	bind_flags &= ~vma_flags;
 	if (bind_flags == 0) {
-		kfree(vma_res);
+		i915_vma_resource_free(vma_res);
 		return 0;
 	}
 
 	GEM_BUG_ON(!vma->pages);
 
+	/* Wait for or await async unbinds touching our range */
+	if (work && bind_flags & vma->vm->bind_async_flags)
+		ret = i915_vma_resource_bind_dep_await(vma->vm,
+						       &work->base.chain,
+						       vma->node.start,
+						       vma->node.size,
+						       true,
+						       GFP_NOWAIT |
+						       __GFP_RETRY_MAYFAIL |
+						       __GFP_NOWARN);
+	else
+		ret = i915_vma_resource_bind_dep_sync(vma->vm, vma->node.start,
						      vma->node.size, true);


Is there nothing scary here with coloring? Say with cache coloring, to 
ensure we unbind the neighbouring nodes (if they are conflicting) before 
doing the bind, or is async unbinding only ever going to be used for the 
ppGTT?


And then I guess there might also be memory coloring where we likely 
need to ensure that all the unbinds within the overlapping PT(s) have 
been completed before doing the bind, since the bind will also increment 
the usage count of the PT, potentially preventing it from being 
destroyed, which will skip nuking the PDE state, AFAIK. Previously the 
drm_mm node(s) would still be present, which would trigger the eviction. 
Although it might be that we just end up aligning everything to 2M, and 
so drop the memory coloring anyway, so maybe no need to worry about this 
yet...