On Fri, Oct 09, 2015 at 07:18:21PM +0200, Daniel Vetter wrote:
> On Fri, Oct 09, 2015 at 10:45:35AM +0100, Chris Wilson wrote:
> > On Fri, Oct 09, 2015 at 11:15:08AM +0200, Daniel Vetter wrote:
> > > My idea was to create a new request for 3. which gets signalled by the
> > > scheduler in intel_lrc_irq_handler. The idea was that we'd only create
> > > these when a ctx switch might occur to avoid overhead, but I guess if we
> > > just outright delay all requests a notch if needed that might work too. But
> > > I'm really not sure on the implications of that (i.e. does the hardware
> > > really unload the ctx if it's idle?), and whether that would still fly with
> > > the scheduler.
> > >
> > > But figuring this one out here seems to be the cornerstone of this reorg.
> > > Without it we can't just throw contexts onto the active list.
> > 
> > (Let me see if I understand it correctly)
> > 
> > Basically the problem is that we can't trust the context object to be
> > synchronized until after the status interrupt. The way we handled that
> > for legacy is to track the currently bound context and keep the
> > vma->pin_count asserted until the request containing the switch away
> > retires. Doing the same for execlists would trivially fix the issue and,
> > if done smartly, allows us to share more code (been there, done that).
> > 
> > That satisfies me for keeping requests as a basic fence in the GPU
> > timeline and should keep everyone happy that the context can't vanish
> > until after it is complete. The only caveat is that we cannot evict the
> > most recent context. For legacy, we do a switch back to the always
> > pinned default context. For execlists we don't, but it still means we
> > should only have one context which cannot be evicted (like legacy). But
> > it does leave us with the issue that i915_gpu_idle() returns early and
> > i915_gem_context_fini() must keep the explicit gpu reset to be
> > absolutely sure that the pending context writes are completed before the
> > final context is unbound.
> 
> Yes, and that was what I originally had in mind. Meanwhile the scheduler
> (will) happen and that means we won't have FIFO ordering. Which means when
> we switch contexts (as opposed to just adding more to the ringbuffer of
> the current one) we won't have any idea which context will be the next
> one. Which also means we don't know which request to pick to retire the
> old context. Hence why I think we need to do better.

But the scheduler does know - it is also in charge of making sure the
retirement queue stays in order. The essence is that we only actually pin
engine->last_context, which is chosen as we submit stuff to the hw.
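
Roughly, as a sketch in pseudo-C - the struct and function names below are
made up for illustration, not the real i915 ones:

#include <stddef.h>     /* NULL, just so the sketch stands alone */

/*
 * Illustrative stand-ins only: not the driver's actual types, just the
 * shape of the scheme described above.
 */
struct sketch_context {
        int pin_count;                          /* stands in for the context vma pin */
};

struct sketch_engine {
        struct sketch_context *last_context;    /* what the hw may still be using */
};

struct sketch_request {
        struct sketch_engine *engine;
        struct sketch_context *ctx;             /* context this request executes in */
        struct sketch_context *switch_away;     /* old context to unpin on retire */
};

/* Runs at the point the scheduler actually hands the request to the hw,
 * which is the first point the submission order (and hence last_context)
 * is known. */
static void sketch_submit(struct sketch_request *rq)
{
        struct sketch_engine *engine = rq->engine;

        if (engine->last_context == rq->ctx)
                return;                 /* just more work in the same context */

        /*
         * The hw can keep reading the old context image until this request
         * has executed, so remember it and defer the unpin to retirement.
         */
        rq->switch_away = engine->last_context;
        rq->ctx->pin_count++;           /* incoming context pinned while bound */
        engine->last_context = rq->ctx;
}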
 
> Of course we can first implement the legacy ctx scheme and then let John
> Harrison deal with the mess again, but he might not like that too much ;-)
> 
> The other upside of tracking the real ctx-no-longer-in-use with the ctx
> itself is that we don't need to pin anything ever (I think), at least
> conceptually. But decidedly less sure about that ...

Right. There's still the reservation phase, but after that the pin just
tracks the hw.
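
To spell out "tracks the hw" with the same made-up names as above: the old
context's pin is only dropped once the request that switched away from it
has completed.

/* Retirement half of the sketch: runs once the request is known to have
 * completed on the hw, so the old context image can no longer be read. */
static void sketch_retire(struct sketch_request *rq)
{
        if (rq->switch_away) {
                rq->switch_away->pin_count--;   /* old context now safe to evict */
                rq->switch_away = NULL;
        }
}
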
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre