Keith Whitwell wrote:
> Dave Airlie wrote:
>>> I'm trying to figure out how context switches actually work... the DRI
>>> lock is overloaded as context switcher, and there is code in the
>>> kernel to call out to a chipset specific context switch routine when
>>> the DRI lock is taken... but only ffb uses it... So I'm guessing the
>>> way context switches work today is that the DRI driver grabs the lock
>>> and after potentially updating the cliprects through X protocol, it
>>> emits all the state it depends on to the cards.  Is the state emission
>>> done by just writing out a bunch of registers?  Is this how the X
>>> server works too, except XAA/EXA acceleration doesn't depend on a lot
>>> of state and thus the DDX driver can emit everything for each
>>> operation?
>> So yes, userspace notices the context has changed and just re-emits 
>> everything into the batchbuffer it is going to send.  For the XAA/EXA 
>> stuff in Intel at least there is an invariant state emission function 
>> that notices what the context was and what the last server 3D user was 
>> (EXA or Xv texturing) and just dumps the state into the batchbuffer 
>> (or currently into the ring).
>>
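
Roughly what that invariant-state check looks like, with made-up names
rather than the real Intel DDX functions:

/* Illustrative only: placeholder names, not the actual Intel DDX code. */
enum last_3d { LAST_3D_NONE, LAST_3D_EXA, LAST_3D_VIDEO };

struct ddx_3d_state {
    enum last_3d last_3d_user;   /* who last touched the 3D pipe      */
    unsigned int last_context;   /* context we last emitted state for */
};

extern void emit_invariant_3d_state(void);   /* stands in for the real dump */

static void
maybe_emit_invariant_state(struct ddx_3d_state *s,
                           unsigned int current_context,
                           enum last_3d who)
{
    if (s->last_context != current_context || s->last_3d_user != who) {
        emit_invariant_3d_state();   /* dump it all into the batch/ring */
        s->last_context = current_context;
        s->last_3d_user = who;
    }
    /* per-operation state and commands follow as usual */
}
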
>>> How would this work if we didn't have a lock?  You can't emit the
>>> state and then start rendering without a lock to keep the state in
>>> place...  If the kernel doesn't restore any state, what's the point of
>>> the drm_context_t we pass to the kernel in drmLock?  Should the kernel
>>> know how to restore state (this ties in to the email from jglisse on
>>> state tracking in drm and all the gallium jazz, I guess)?  How do we
>>> identify state to the kernel, and how do we pass it in via the
>>> super-ioctl?  Can we add a list of registers to be written and the
>>> values?  I talked to Dave about it and we agreed that adding a
>>> drm_context_t to the super-ioctl would work, but now I'm thinking if
>>> the kernel doesn't track any state it won't really work.  Maybe
>>> cross-client state sharing isn't useful for performance as Keith and
>>> Roland argue, but if the kernel doesn't restore state when it sees a
>>> super-ioctl coming from a different context, who does?
>>>
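
For what it's worth, "adding a drm_context_t to the super-ioctl" could
look something like the sketch below; this is only an illustration of
the idea, not an interface that exists anywhere:

#include <stdint.h>
#include "drm.h"   /* drm_context_t */

/* Hypothetical super-ioctl argument -- a sketch of the idea only, not a
 * struct that exists in the drm today. */
struct drm_superioctl_sketch {
    drm_context_t ctx;           /* context this submission belongs to */
    uint64_t      buffers_ptr;   /* userspace list of buffers/relocs   */
    uint32_t      num_buffers;
    uint32_t      batch_len;     /* batchbuffer length in bytes        */
    uint64_t      state_bo;      /* optional: a state buffer the kernel
                                    could replay on a context switch
                                    (see the suggestion just below)    */
};
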
>> My guess at one way is to store a buffer object with the current state 
>> emission in it and submit it with the super-ioctl; then, if we have 
>> lost the context, emit it before the batchbuffer.
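
A rough sketch of that on the kernel side, with hypothetical names:

/* Sketch of the idea above: replay the client's state buffer only when
 * the submitting context differs from the last one the kernel saw. */
struct my_device { unsigned int last_context; /* ... */ };
struct my_submit { unsigned int ctx; void *state_bo, *batch_bo; };

extern void emit_buffer(struct my_device *dev, void *bo);

static void
submit_with_state(struct my_device *dev, struct my_submit *sub)
{
    if (dev->last_context != sub->ctx) {
        emit_buffer(dev, sub->state_bo);   /* lost context: replay state */
        dev->last_context = sub->ctx;
    }
    emit_buffer(dev, sub->batch_bo);       /* then the batchbuffer itself */
}
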
> 
> The way drivers actually work at the moment is to emit a full state as a 
> preamble to each batchbuffer.  Depending on the hardware, this can be 
> pretty low overhead, and it seems that the trend in hardware is to make 
> this operation cheaper and cheaper.  This works fine without the lock.
> 
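
Schematically, that per-batch preamble looks something like this (all
placeholder names, not any particular driver's code):

/* Sketch of the "full state preamble per batch" scheme. */
struct client;   /* per-context driver state */

extern void begin_batch(struct client *cl);
extern void emit_full_state(struct client *cl);
extern void emit_pending_rendering(struct client *cl);
extern void fire_batch(struct client *cl);

static void
flush_batch(struct client *cl)
{
    begin_batch(cl);
    emit_full_state(cl);          /* preamble: everything this batch needs */
    emit_pending_rendering(cl);   /* the queued drawing commands           */
    fire_batch(cl);               /* the batch is self-contained, so no lock
                                     is needed to keep state in place      */
}
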
> There is another, complementary trend to support, one way or another, 
> multiple hardware contexts (obviously nvidia have done this for years), 
> meaning that the hardware effectively does the context switches.  This 
> is probably how most cards will end up working in the future, if not 
> already.
> 
> Neither of these need a lock for detecting context switches.
> 
> Keith
> 

I will go this way too for r300/r400/r500.  There are not many registers
that change between different contexts, and the registers which need
special treatment will be handled by the drm (this boils down to where
3d gets rendered, where the zbuffer is, and the pitch/tile information
for these two buffers; this information will be embedded in the
drm_drawable along with the cliprects, if I am right :)).  It will be up
to the client to re-emit enough state for the card to be in good shape
for its rendering, and I don't think it's worthwhile to provide
facilities to keep the hw in a given state.  So I don't need a lock, and
indeed my current code doesn't use any, except for ring buffer emission
(the only area shared among different clients that I can see in my case).
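
In case it helps, the split I have in mind looks roughly like this (all
names below are placeholders, nothing final):

#include <stdint.h>

/* Sketch only: the drm programs the few registers it has to own, using
 * the drawable information it tracks; everything else is re-emitted by
 * the client in its own command stream.  REG_* and emit_reg() stand in
 * for the real register names and helpers. */
#define REG_COLOR_OFFSET 0x0000   /* dummy offsets */
#define REG_COLOR_PITCH  0x0004
#define REG_DEPTH_OFFSET 0x0008
#define REG_DEPTH_PITCH  0x000c

struct my_drawable_info {
    uint32_t color_offset, color_pitch;   /* where 3d gets rendered  */
    uint32_t depth_offset, depth_pitch;   /* where the zbuffer lives */
    uint32_t tile_flags;                  /* tiling for both buffers */
};

struct my_device;
extern void emit_reg(struct my_device *dev, uint32_t reg, uint32_t val);

static void
emit_drawable_regs(struct my_device *dev, struct my_drawable_info *draw)
{
    emit_reg(dev, REG_COLOR_OFFSET, draw->color_offset);
    emit_reg(dev, REG_COLOR_PITCH,  draw->color_pitch | draw->tile_flags);
    emit_reg(dev, REG_DEPTH_OFFSET, draw->depth_offset);
    emit_reg(dev, REG_DEPTH_PITCH,  draw->depth_pitch | draw->tile_flags);
}
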

Cheers,
Jerome Glisse
