On Wed, Nov 28, 2007 at 12:43:41AM +0100, Stephane Marchesin wrote:
> On 11/27/07, Kristian Høgsberg <[EMAIL PROTECTED]> wrote:
> >
> > On Nov 27, 2007 11:48 AM, Stephane Marchesin
> > <[EMAIL PROTECTED]> wrote:
> > > On 11/22/07, Kristian Høgsberg <[EMAIL PROTECTED]> wrote:
> > ...
> > > > It's all delightfully simple, but I'm starting to reconsider whether
> > > > the "lockless" bullet point is realistic.   Note, the drawable lock is
> > > > gone, we always render to private back buffers and do swap buffers in
> > > > the kernel, so I'm "only" concerned with the DRI lock here.  The idea
> > > > is that since we have the memory manager and the super-ioctl and the X
> > > > server now can push cliprects into the kernel in one atomic operation,
> > > > we would be able to get rid of the DRI lock.  My overall question,
> > > > here is, is that feasible?
> > >
> > > How do you plan to ensure that X didn't change the cliprects after you
> > > emitted them to the DRM ?
> >
> > The idea was that the buffer swap happens in the kernel, triggered by
> > an ioctl. The kernel generates the command stream to execute the swap
> > against the current set of cliprects.  The back buffers are always
> > private so the cliprects only come into play when copying from the
> > back buffer to the front buffer.  Single buffered visuals are secretly
> > double buffered and implemented the same way.
> 
> 
>  What if cliprects change between the time you emit them to the fifo and the
> time the blit gets executed by the card ? Do you sync to the card in-drm,
> effectively killing performance ?
> 
> > I'm trying to figure now whether it makes more sense to keep cliprects
> > and swapbuffer out of the kernel, which wouldn't change the above
> > much, except the swapbuffer case.  I described the idea for swapbuffer
> > in this case in my reply to Thomas: the X server publishes cliprects
> > to the clients through a shared ring buffer, and clients parse the
> > clip rect changes out of this buffer as they need it.  When posting a
> > swap buffer request, the buffer head should be included in the
> > super-ioctl so that the kernel can reject stale requests.  When that
> > happens, the client must parse the new cliprect info and resubmit an
> > updated swap buffer request.
> 
> 
> I fail to see how this works with a lockless design. How do you ensure the X
> server doesn't change cliprects between the time it has written those in the
> shared ring buffer and the time the DRI client picks them up and has the
> command fired and actually executed ? Do you lock out the server during that
> time ?
> 
> Stephane
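
To check my own understanding of the stale-swap rejection Kristian describes above, here is a rough sketch. All names are hypothetical, this is only the idea, not a real interface: the X server bumps a generation counter each time it publishes new cliprects to the shared ring, the client copies the head it last parsed into its super-ioctl, and the kernel rejects the swap if the head has moved on.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical names throughout -- a sketch of the idea only. */
struct drawable_state {
    uint32_t cliprect_head;   /* last cliprect generation published by X */
};

struct swap_request {
    uint32_t seen_head;       /* head the client had parsed when submitting */
};

/* Kernel-side check: accept the swap only if the client's cliprects
 * are still current; otherwise the client must re-parse the ring and
 * resubmit an updated swap request. */
bool swap_request_valid(const struct drawable_state *d,
                        const struct swap_request *req)
{
    return req->seen_head == d->cliprect_head;
}
```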

All this is starting to confuse me a bit :) So, if I understand
correctly, all clients render to private buffers and then you want to
blit those into the front buffer. In a composited world it's up to the
compositor to do this, so you don't have to care about cliprects.

I think we could have some kind of dumb default compositor, built
directly into the X server, that would handle this blit. This
compositor might not even need cliprects.

For window|pixmap|... resizing, could the following scheme work:

Single buffered path (i.e. no back buffer, but the client still
renders to a private buffer):
1) X gets the resize event
2) X asks the drm to allocate a new drawable with the new size
3) X enqueues a request to the drm which copies the current buffer
   content into the new one and also updates the drawable, so
   further rendering requests will happen in the new buffer
4) X starts using the new buffer when compositing into the
   scanout buffer

So you might see rendering artifacts, but I guess this has to be
expected in single buffered applications, where a size change
can happen at any time.
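The copy in step 3 of the single buffered path could be sketched like this (hypothetical names and a 1-byte-per-pixel layout, just to illustrate the idea of copying the overlapping region into the freshly allocated drawable):

```c
#include <stdlib.h>
#include <string.h>

/* Sketch only -- not a real drm structure. */
struct drawable {
    int width, height;
    unsigned char *pixels;    /* 1 byte per pixel to keep it simple */
};

/* Steps 2-3: allocate a drawable of the new size and copy as much of
 * the old content as fits, so further rendering goes to the new buffer. */
struct drawable *drawable_resize(const struct drawable *old, int w, int h)
{
    struct drawable *d = malloc(sizeof(*d));
    if (!d)
        return NULL;
    d->width = w;
    d->height = h;
    d->pixels = calloc((size_t)w * h, 1);   /* new area starts cleared */
    if (!d->pixels) {
        free(d);
        return NULL;
    }
    /* Copy the overlapping region row by row. */
    int copy_w = w < old->width ? w : old->width;
    int copy_h = h < old->height ? h : old->height;
    for (int y = 0; y < copy_h; y++)
        memcpy(d->pixels + (size_t)y * w,
               old->pixels + (size_t)y * old->width,
               (size_t)copy_w);
    return d;
}
```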

Double buffered path:
1) X gets the resize event
2) X allocates a new front buffer and blits the old front buffer into
   it (I might be wrong and X might not need to preserve content
   on resize), so X can start blitting using the new buffer size but
   with the old content.
   X allocates a new back buffer
3) X enqueues a request to the drm to change the drawable's back buffer

If there is no pending rendering on the current back buffer (old size):
4) the drm updates the drawable so the back buffer is the new one
   (with the new size)
Finished with resizing

If there is pending rendering on the current back buffer (old size):
4) on the next swapbuffer, X blits the back buffer into the front
   buffer (if there is a need to preserve content), deallocates the
   back buffer and allocates a new one with the new size (so the
   front buffer stays the same, just a part of it is updated)
Finished with resizing
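The two cases of step 4 in the double buffered path could be sketched as follows (all names made up for illustration): if the old back buffer is idle the switch happens immediately, otherwise the new size is remembered and applied at the next swapbuffer.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Sketch only -- hypothetical structures, not the real drm interface. */
struct buffer { int width, height; };

struct drawable {
    struct buffer *front, *back;
    int pending_w, pending_h;   /* requested size, 0 if none pending */
    bool back_busy;             /* pending rendering on the back buffer */
};

struct buffer *buffer_alloc(int w, int h)
{
    struct buffer *b = malloc(sizeof(*b));
    b->width = w;
    b->height = h;
    return b;
}

/* Step 3: ask for the back buffer to be replaced. */
void drawable_request_resize(struct drawable *d, int w, int h)
{
    if (!d->back_busy) {
        /* Step 4, idle case: switch immediately. */
        free(d->back);
        d->back = buffer_alloc(w, h);
        d->pending_w = d->pending_h = 0;
    } else {
        /* Busy case: remember the size for the next swap. */
        d->pending_w = w;
        d->pending_h = h;
    }
}

/* Step 4, busy case: at swapbuffer time, blit then reallocate. */
void drawable_swapbuffers(struct drawable *d)
{
    /* (the blit of back -> front would happen here) */
    d->back_busy = false;
    if (d->pending_w) {
        free(d->back);
        d->back = buffer_alloc(d->pending_w, d->pending_h);
        d->pending_w = d->pending_h = 0;
    }
}
```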

This follows the comment Keith made on the wiki about resizing. I
think it's the best strategy, even though, if redrawing the window is
slow, one might never see proper content while constantly resizing the
window (but we can't do much against broken users ;))

Of course, in this scheme you do the blitting from the private client
buffer (whether it's the front buffer, or the only buffer for a single
buffered app) in user space, i.e. in the X server (so no need for
cliprects in the kernel). Note that you can blit as soon as step 3
finishes for single buffering, and as soon as step 2 finishes for
double buffering.

Does this make sense? Is it reasonable?

Cheers,
Jerome Glisse

--
_______________________________________________
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel
