On 02/18/2016 06:27 AM, john.c.harri...@intel.com wrote:
> From: John Harrison <john.c.harri...@intel.com>
> 
> When the seqno wraps around zero, the entire GPU is forced to be idle
> for some reason (possibly only to work around issues with hardware
> semaphores, but no one seems too sure!). This causes a problem if the
> forced idle occurs at an inopportune moment such as in the middle of
> submitting a batch buffer. Specifically, it would lead to recursive
> submits - submitting work requires a new seqno, the new seqno requires
> idling the ring, idling the ring requires submitting work, submitting
> work requires a new seqno...
> 
> This change adds a 'flush' parameter to the idle function call which
> specifies whether the scheduler queues should be flushed out. That is,
> is the call intended just to idle the ring as it is right now (no
> flush), or is it intended to force all outstanding work out of the
> system (with flush)?
> 
> In the seqno wrap case, pending work is not an issue because the next
> operation will be to submit it. However, in other cases, the intention
> is to make sure everything that could be done has been done.
> 
> For: VIZ-1587
> Signed-off-by: John Harrison <john.c.harri...@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gem.c         |  4 ++--
>  drivers/gpu/drm/i915/intel_lrc.c        |  2 +-
>  drivers/gpu/drm/i915/intel_ringbuffer.c | 17 +++++++++++++++--
>  drivers/gpu/drm/i915/intel_ringbuffer.h |  2 +-
>  4 files changed, 19 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index d7f7f7a..a249e52 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2564,7 +2564,7 @@ i915_gem_init_seqno(struct drm_device *dev, u32 seqno)
>  
>       /* Carefully retire all requests without writing to the rings */
>       for_each_ring(ring, dev_priv, i) {
> -             ret = intel_ring_idle(ring);
> +             ret = intel_ring_idle(ring, false);
>               if (ret)
>                       return ret;
>       }
> @@ -3808,7 +3808,7 @@ int i915_gpu_idle(struct drm_device *dev)
>                       i915_add_request_no_flush(req);
>               }
>  
> -             ret = intel_ring_idle(ring);
> +             ret = intel_ring_idle(ring, true);
>               if (ret)
>                       return ret;
>       }
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index f4bab82..e056875 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1058,7 +1058,7 @@ void intel_logical_ring_stop(struct intel_engine_cs *ring)
>       if (!intel_ring_initialized(ring))
>               return;
>  
> -     ret = intel_ring_idle(ring);
> +     ret = intel_ring_idle(ring, true);
>       if (ret && !i915_reset_in_progress(&to_i915(ring->dev)->gpu_error))
>               DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
>                         ring->name, ret);
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index a2093f5..70ef9f0 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -2288,9 +2288,22 @@ static void __wrap_ring_buffer(struct intel_ringbuffer *ringbuf)
>       intel_ring_update_space(ringbuf);
>  }
>  
> -int intel_ring_idle(struct intel_engine_cs *ring)
> +int intel_ring_idle(struct intel_engine_cs *ring, bool flush)
>  {
>       struct drm_i915_gem_request *req;
> +     int ret;
> +
> +     /*
> +      * NB: Must not flush the scheduler if this idle request is from
> +      * within an execbuff submission (i.e. due to 'get_seqno' calling
> +      * 'wrap_seqno' calling 'idle'). As that would lead to recursive
> +      * flushes!
> +      */
> +     if (flush) {
> +             ret = i915_scheduler_flush(ring, true);
> +             if (ret)
> +                     return ret;
> +     }
>  
>       /* Wait upon the last request to be completed */
>       if (list_empty(&ring->request_list))
> @@ -3095,7 +3108,7 @@ intel_stop_ring_buffer(struct intel_engine_cs *ring)
>       if (!intel_ring_initialized(ring))
>               return;
>  
> -     ret = intel_ring_idle(ring);
> +     ret = intel_ring_idle(ring, true);
>       if (ret && !i915_reset_in_progress(&to_i915(ring->dev)->gpu_error))
>               DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
>                         ring->name, ret);
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index ada93a9..cca476f 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -478,7 +478,7 @@ void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
>  int intel_ring_space(struct intel_ringbuffer *ringbuf);
>  bool intel_ring_stopped(struct intel_engine_cs *ring);
>  
> -int __must_check intel_ring_idle(struct intel_engine_cs *ring);
> +int __must_check intel_ring_idle(struct intel_engine_cs *ring, bool flush);
>  void intel_ring_init_seqno(struct intel_engine_cs *ring, u32 seqno);
>  int intel_ring_flush_all_caches(struct drm_i915_gem_request *req);
>  int intel_ring_invalidate_all_caches(struct drm_i915_gem_request *req);
> 

Maybe Chris remembers the history here; I think the wraparound idle goes all 
the way back to Eric's original work with wrapping (something we had a lot of 
trouble with early on iirc).

My only suggestion here is to make the bool-taking function an internal 
__intel_ring_idle(ring, bool) and add wrappers around it: keep 
intel_ring_idle(ring) for the existing usage (passing false) and add a new 
intel_ring_flush(ring) that passes true, along with some kerneldoc explaining 
the difference; see the sketch below. Otherwise everyone will have to look up 
the param all the time. :)
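
For concreteness, a minimal sketch of what that might look like in 
intel_ringbuffer.h (the __intel_ring_idle/intel_ring_flush names are from the 
suggestion above; the kerneldoc wording and inline bodies are illustrative, 
not from the patch):

/* Internal helper; callers should use the wrappers below. */
int __must_check __intel_ring_idle(struct intel_engine_cs *ring, bool flush);

/**
 * intel_ring_idle - wait for all outstanding requests on a ring to complete
 * @ring: ring to idle
 *
 * Does not touch the scheduler queues, so it is safe to call from within
 * an execbuff submission (e.g. the seqno wrap path).
 */
static inline int __must_check intel_ring_idle(struct intel_engine_cs *ring)
{
	return __intel_ring_idle(ring, false);
}

/**
 * intel_ring_flush - force all outstanding work on a ring out of the system
 * @ring: ring to flush and then idle
 *
 * Flushes the scheduler queues before idling, so it must not be called
 * from within an execbuff submission.
 */
static inline int __must_check intel_ring_flush(struct intel_engine_cs *ring)
{
	return __intel_ring_idle(ring, true);
}

Callers like i915_gpu_idle() and intel_stop_ring_buffer() would then read as 
intel_ring_flush(ring), while the seqno wrap path in i915_gem_init_seqno() 
keeps calling plain intel_ring_idle(ring).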

With that change (because I'm a bool param hater, at least in exposed APIs):
Reviewed-by: Jesse Barnes <jbar...@virtuousgeek.org>