On Tue, Jul 01, 2014 at 11:17:48AM -0700, Ben Widawsky wrote:
> If a VM still has objects which are bound (exactly: which have a node
> reserved in the drm_mm), and we are in the middle of a reset, we have no
> hope of the standard methods fixing the situation (ring idle won't
> work). We must therefore let the reset handler take its course, and
> then we can resume tearing down the VM.
> 
> This logic very much duplicates^Wresembles the logic in our wait for
> error code. I've decided to leave it open coded because I expect this
> bit of code to require tweaks and changes over time.
> 
> Interruption via a signal causes a really similar problem.
> 
> This should obviate the need for the yet-unmerged patch from Chris
> (and an identical patch from me, which was first!!):
> drm/i915: Prevent signals from interrupting close()
> 
> I have a followup patch to implement deferred free, before you complain.
> 
> Signed-off-by: Ben Widawsky <b...@bwidawsk.net>

Imo this goes in the wrong direction. ppgtt_cleanup really shouldn't ever
have a need to wait for the gpu. We need to rework the lifetimes such that
we keep the ppgtt alive until the gpu is done with it. Similarly to how we
keep the objects themselves around when the gpu is still using them. Even
when userspace has already dropped the last reference.

Having such a stark behaviour difference between ppgtt lifetimes and
object lifetimes only leads to unnecessary complexity and fragility in the
code. And this patch here is a good example of this.
-Daniel

> ---
>  drivers/gpu/drm/i915/i915_gem_context.c | 51 +++++++++++++++++++++++++++++++--
>  1 file changed, 48 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
> index 8d106d9..e1b5613 100644
> --- a/drivers/gpu/drm/i915/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/i915_gem_context.c
> @@ -101,6 +101,32 @@ static void do_ppgtt_cleanup(struct i915_hw_ppgtt *ppgtt)
>       struct drm_device *dev = ppgtt->base.dev;
>       struct drm_i915_private *dev_priv = dev->dev_private;
>       struct i915_address_space *vm = &ppgtt->base;
> +     bool do_idle = false;
> +     int ret;
> +
> +     /* If we get here while in reset, we need to let the reset handler run
> +      * first, or else our VM teardown isn't going to go smoothly. There are
> +      * a couple of options at this point, but letting the reset handler do
> +      * its thing is the most desirable. The reset handler will take care of
> +      * retiring the stuck requests.
> +      */
> +     if (i915_reset_in_progress(&dev_priv->gpu_error)) {
> +             mutex_unlock(&dev->struct_mutex);
> +#define EXIT_COND (!i915_reset_in_progress(&dev_priv->gpu_error) || \
> +                i915_terminally_wedged(&dev_priv->gpu_error))
> +             ret = wait_event_timeout(dev_priv->gpu_error.reset_queue,
> +                                      EXIT_COND,
> +                                      10 * HZ);
> +             if (!ret) {
> +                     /* it's unlikely idling will solve anything, but it
> +                      * shouldn't hurt to try. */
> +                     do_idle = true;
> +                     /* TODO: go down kicking and screaming harder */
> +             }
> +#undef EXIT_COND
> +
> +             mutex_lock(&dev->struct_mutex);
> +     }
>  
>       if (ppgtt == dev_priv->mm.aliasing_ppgtt ||
>           (list_empty(&vm->active_list) && list_empty(&vm->inactive_list))) {
> @@ -117,14 +143,33 @@ static void do_ppgtt_cleanup(struct i915_hw_ppgtt *ppgtt)
>       if (!list_empty(&vm->active_list)) {
>               struct i915_vma *vma;
>  
> +             do_idle = true;
>               list_for_each_entry(vma, &vm->active_list, mm_list)
>                       if (WARN_ON(list_empty(&vma->vma_link) ||
>                                   list_is_singular(&vma->vma_link)))
>                               break;
> -             i915_gem_evict_vm(&ppgtt->base, true, true);
> -     } else {
> +     } else
>               i915_gem_retire_requests(dev);
> -             i915_gem_evict_vm(&ppgtt->base, false, true);
> +
> +     /* We have a problem here where VM teardown cannot be interrupted, or
> +      * else the ppgtt cleanup will fail. As an example, a precisely timed
> +      * SIGKILL could lead to an OOPS, or worse. There are two options:
> +      * 1. Make the eviction uninterruptible
> +      * 2. Defer the eviction if it was interrupted.
> +      *
> +      * Option #1 is not the friendliest, but it's the easiest to implement,
> +      * and least error prone.
> +      * TODO: Implement option 2
> +      */
> +     ret = i915_gem_evict_vm(&ppgtt->base, do_idle, !do_idle);
> +     if (ret == -ERESTARTSYS)
> +             ret = i915_gem_evict_vm(&ppgtt->base, do_idle, false);
> +     WARN_ON(ret);
> +     WARN_ON(!list_empty(&vm->active_list));
> +
> +     /* This is going to blow up badly if the mm is unclean */
> +     if (WARN_ON(!list_empty(&ppgtt->base.mm.head_node.node_list))) {
> +             /* TODO: go down kicking and screaming harder++ */
>       }
>  
>       ppgtt->base.cleanup(&ppgtt->base);
> -- 
> 2.0.1
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch