On Tue, Jun 18, 2002 at 12:09:02AM +0100, José Fonseca wrote:
> On 2002.06.17 23:19 Keith Whitwell wrote:
> > 
> >> 
> >> We could overcome the GLX difficulties in the same way we do now in 
> >> libGL with the direct rendering.
> >> 
> >> But I still don't understand why vertex arrays would be such a problem 
> >> over shared memory. Aren't they basically just read and transformed 
> >> into Mesa's vertex buffers? Couldn't the OpenGL drivers just read these 
> >> vertex arrays directly out of the client's memory space from the X process?
> > 
> > There's no indication of the 'top' of the vertex buffer, so you don't 
> > know how much to transfer.  There are no semantics to tell you whether 
> > the vertex buffer contents have changed, so you don't know how often to 
> > transfer.
> 
But why even transfer in the first place? Why not simply map the needed parts 
of the vertex buffers into the X server's memory space, or does the Linux 
architecture make that impossible?

This is an old message, but I didn't see a reply to this point.  The reason
is that the indirect rendering path they've been talking about is the *same*
one used by remote clients.  A client running on a different box can't
directly map anything, so the indirect clients on the same box (as the X
server) have to follow the same rules.

-- 
Tell that to the Marines!

