Stephane Marchesin wrote:
> 
> 
> On 11/28/07, *Keith Whitwell* <[EMAIL PROTECTED]> wrote:
> 
>     Stephane Marchesin wrote:
>      >
>      >
>      > On 28 Nov 2007 06:19:39 +0100, *Soeren Sandmann* <[EMAIL PROTECTED]> wrote:
>      >
>      >     "Stephane Marchesin" <[EMAIL PROTECTED]> writes:
>      >
>      >      > I fail to see how this works with a lockless design. How do
>      >      > you ensure the X server doesn't change cliprects between the
>      >      > time it has written those in the shared ring buffer and the
>      >      > time the DRI client picks them up and has the command fired
>      >      > and actually executed? Do you lock out the server during that
>      >      > time?
>      >
>      >     The scheme I have been advocating is this:
>      >
>      >     - A new extension is added to the X server, with a
>      >       PixmapFromBufferObject request.
>      >
>      >     - Clients render into a private back buffer object, for which
>      >       they use the new extension to generate a pixmap.
>      >
>      >     - When a client wishes to copy something to the front buffer
>      >       (for whatever reason - glXSwapBuffers(), glCopyPixels(), etc.),
>      >       it uses plain old XCopyArea() with the generated pixmap. The X
>      >       server is then responsible for any clipping necessary.
>      >
>      >     This scheme puts all clip list management in the X server. No
>      >     cliprects in shared memory or in the kernel would be required.
>      >     And no locking is required since the X server is already
>      >     processing requests in sequence.
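
[Sketching the above for concreteness: the flow a client would follow under
that scheme, assuming a hypothetical XDRIPixmapFromBufferObject() wrapper for
the proposed PixmapFromBufferObject request. Only XCopyArea()/XFlush() are
plain Xlib; the wrapper and its signature are assumptions, not an existing
API.]

    #include <X11/Xlib.h>

    /* Hypothetical wrapper for the proposed PixmapFromBufferObject request:
     * wraps a client buffer object handle in a server-side Pixmap. */
    Pixmap XDRIPixmapFromBufferObject(Display *dpy, Drawable d,
                                      unsigned int bo_handle,
                                      unsigned int width, unsigned int height,
                                      unsigned int depth);

    static void present_back_buffer(Display *dpy, Window win, GC gc,
                                    unsigned int bo_handle,
                                    unsigned int width, unsigned int height)
    {
        /* 1. Wrap the client's private back buffer object in a pixmap
         *    (a real client would create this once and reuse it). */
        Pixmap back = XDRIPixmapFromBufferObject(dpy, win, bo_handle,
                                                 width, height, 24);

        /* 2. The "swap" is just a copy; the server applies its own clip
         *    list, so cliprects never leave the server. */
        XCopyArea(dpy, back, win, gc, 0, 0, width, height, 0, 0);
        XFlush(dpy);

        XFreePixmap(dpy, back);
    }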
>      >
>      >
>      > Yes, that is the idea I want to pursue for nvidia hardware, although
>      > I'm not sure whether we can/want to implement it in terms of existing
>      > X primitives or a new X extension.
>      >
>      >
>      >     To synchronize with vblank, a new SYNC counter is introduced
>      >     that records the number of vblanks since some time in the past.
>      >     The clients can then issue SyncAwait requests before any copy
>      >     they want synchronized with vblank. This allows the client to do
>      >     useful processing while it waits, which I don't believe is the
>      >     case now.
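
[Again just a sketch: how a client might issue such a SyncAwait through the
standard X Synchronization extension. The counter name "VBLANK" is my
assumption (the proposal only says a new system counter counting vblanks
would be exposed); the XSync* calls themselves are the existing API.]

    #include <string.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/sync.h>

    /* Block further processing of *this client's* requests in the server
     * until the next vblank; the client process itself keeps running. */
    static void await_next_vblank(Display *dpy)
    {
        int event_base, error_base, major, minor, n, i, overflow;
        XSyncSystemCounter *counters;
        XSyncCounter vblank = None;
        XSyncValue now, one, target;
        XSyncWaitCondition cond;

        if (!XSyncQueryExtension(dpy, &event_base, &error_base) ||
            !XSyncInitialize(dpy, &major, &minor))
            return;

        /* Find the (assumed) vblank system counter. */
        counters = XSyncListSystemCounters(dpy, &n);
        for (i = 0; i < n; i++)
            if (strcmp(counters[i].name, "VBLANK") == 0)
                vblank = counters[i].counter;
        XSyncFreeSystemCounterList(counters);
        if (vblank == None)
            return;

        /* Await the counter advancing past its current value, i.e. the
         * next vblank. */
        XSyncQueryCounter(dpy, vblank, &now);
        XSyncIntToValue(&one, 1);
        XSyncValueAdd(&target, now, one, &overflow);

        cond.trigger.counter    = vblank;
        cond.trigger.value_type = XSyncAbsolute;
        cond.trigger.wait_value = target;
        cond.trigger.test_type  = XSyncPositiveComparison;
        XSyncIntToValue(&cond.event_threshold, 0);

        XSyncAwait(dpy, &cond, 1);
        /* Any XCopyArea issued after this point executes no earlier than
         * the next vblank. */
    }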
>      >
>      >
>      > Since we can put a "wait until vblank on crtc #X" command into a
>      > fifo on nvidia hardware, the vblank issue is non-existent for us.
>      > We get precise vblank without CPU intervention.
> 
>     You still have some issues...
> 
>     The choice is: do you put the wait-until-vblank command in the same fifo
>     as the X server rendering or not?
> 
>     If yes -- you end up with nasty latency for X as its rendering is
>     blocked by swapbuffers.
> 
> 
> Yes, I want to go for that simpler approach first and see if the 
> blocking gets bad (I can't really say until I've tried).

I'm all for experiments such as this.

Although I have a strong belief about how it will turn out, nothing is better
at changing these sorts of beliefs than actual results.

Keith
