Quoting Chris Wilson (2017-08-21 10:28:16)
> Quoting Mika Kuoppala (2017-08-21 10:17:52)
> > Chris Wilson <ch...@chris-wilson.co.uk> writes:
> > 
> > > During suspend we want to flush out all active contexts and their
> > > rendering. To do so we queue a request from the kernel's context, once
> > > we know that request is done, we know the GPU is completely idle. To
> > > speed up that switch bump the GPU clocks.
> > >
> > > Switching to the kernel context prior to idling is also used to enforce
> > > a barrier before changing OA properties, and when evicting active
> > > rendering from the global GTT. All of these are cases where we do want
> > > to race-to-idle.
> > >
> > > Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
> > > Cc: David Weinehall <david.weineh...@linux.intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/i915_gem_context.c | 11 ++++++++---
> > >  1 file changed, 8 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
> > > index 58a2a44f88bd..ca1423ad2708 100644
> > > --- a/drivers/gpu/drm/i915/i915_gem_context.c
> > > +++ b/drivers/gpu/drm/i915/i915_gem_context.c
> > > @@ -895,6 +895,7 @@ int i915_gem_switch_to_kernel_context(struct drm_i915_private *dev_priv)
> > >  
> > >       for_each_engine(engine, dev_priv, id) {
> > >               struct drm_i915_gem_request *req;
> > > +             bool active = false;
> > >               int ret;
> > >  
> > >               if (engine_has_kernel_context(engine))
> > > @@ -913,13 +914,17 @@ int i915_gem_switch_to_kernel_context(struct drm_i915_private *dev_priv)
> > >                       prev = i915_gem_active_raw(&tl->last_request,
> > >                                                  &dev_priv->drm.struct_mutex);
> > >                       if (prev)
> > > -                             i915_sw_fence_await_sw_fence_gfp(&req->submit,
> > > -                                                              &prev->submit,
> > > -                                                              GFP_KERNEL);
> > > +                             active |= i915_sw_fence_await_sw_fence_gfp(&req->submit,
> > > +                                                                        &prev->submit,
> > > +                                                                        GFP_KERNEL) > 0;
> > 
> > There is no point in kicking the clocks if we are the only request left?
> > 
> > Well, logical, as the request is empty; just pondering whether the actual
> > ctx save/restore would finish quicker.
> 
> I was thinking that if it was just the context save itself, it would not
> be enough of a difference to justify itself. Just gut feeling and not
> measured; I worry about the irony of boosting from idle just to idle.

Hmm, or we could be more precise and just set the clocks high rather
than queue a task. The complication isn't worth it for just a single
callsite, but I am contemplating supplying boost/clocks information
along with the request.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx