On Sat, 29 Jan 2005 12:05:23 -0500, Timothy Miller <[EMAIL PROTECTED]> wrote:
>
> > > Another thing to realize is that it's unusual to have more than one GL
> > > client running at one time, but you USUALLY have LOTS of X11 clients.
> > > They all allocate gobs of pixmaps, and you really CAN run out of
> > > graphics memory, and the user should never have to know that.
> >
> > While this is true today, I would prefer a more forward-looking approach.
> > With developments like the Cairo rendering library, why shouldn't *every*
> > local X11 client be able to accelerate rendering operations directly,
> > including 2D operations?
>
> In my world, network transparency of X11 is something that can never
> be sacrificed. Network transparency is X11's greatest strength.
> Indeed, I think that the fact that OpenGL isn't network transparent
> (other than via GLX) is one of its greatest weaknesses.
Yes, network transparency is an absolutely essential feature. People
who complain about it usually do not understand how often it is used,
and how negligible the performance impact is.
But your statement that OpenGL is not network transparent is simply
incorrect. OpenGL was clearly designed to be network transparent, and
it is written into the specification. You simply have to look for the
key words. Go back and read the spec again, and this time look
specifically for where it talks about state. Now look for where it
talks about client state. Every single item in the state is associated
either with a client or the server, and there are times the spec has
to talk about the differences and how/when data moves between them.
(You might have problems finding the word "server" in there, because
the spec is written mostly about the rendering details which are by
definition the server so that is simply assumed, and details about the
client are explicitly identified.) Try glPushAttrib and
glPushClientAttrib for a quick example.
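To make the split concrete, here is a toy C model of the two attribute
stacks (the struct names and fields are invented, not real GL): the
server-side push saves only server state and the client-side push saves
only client state, mirroring why glPushAttrib and glPushClientAttrib
had to be separate calls.

```c
#include <assert.h>

/* Toy model of OpenGL's split state (hypothetical names, not real GL).
 * Server state lives wherever rendering happens (GPU or remote box);
 * client state lives in the application's address space. Each side
 * gets its own attribute stack. */

#define STACK_MAX 16

typedef struct { int blend_enabled; }     ServerState; /* cf. GL_BLEND */
typedef struct { int unpack_alignment; }  ClientState; /* cf. GL_UNPACK_ALIGNMENT */

typedef struct {
    ServerState srv; ClientState cli;
    ServerState srv_stack[STACK_MAX]; int srv_top;
    ClientState cli_stack[STACK_MAX]; int cli_top;
} Context;

/* cf. glPushAttrib/glPopAttrib: touch ONLY server state */
static void push_attrib(Context *c)        { c->srv_stack[c->srv_top++] = c->srv; }
static void pop_attrib(Context *c)         { c->srv = c->srv_stack[--c->srv_top]; }

/* cf. glPushClientAttrib/glPopClientAttrib: touch ONLY client state */
static void push_client_attrib(Context *c) { c->cli_stack[c->cli_top++] = c->cli; }
static void pop_client_attrib(Context *c)  { c->cli = c->cli_stack[--c->cli_top]; }
```

Note that popping the server stack leaves client state untouched, which
is exactly the distinction the spec is drawing.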
Part of the difficulty in seeing it is that many people assume
distinctions between the client and the "server" are hardware-based,
with the server being the graphics card and the client being a process
on the host machine. While this is often true, the spec has been
written in such a way that a network connection can be viewed the same
as the PCI/AGP/etc. bus, and it all still makes sense. (Note though
that this is not made explicit in the spec because very few
implementations do all of OpenGL in hardware so host-based software
fallbacks are almost universal.) Another difficulty is that OpenGL
does not define a wire protocol the way X11 does, but defers such
details to other APIs such as GLX and WGL. Personally, I consider this
appropriate.
You can also see people's bias in their use of OpenGL features. The
CallList mechanism was designed with network transparency in mind. But
most people eschew it in favor of vertex arrays, which (if you read
carefully) are written to function correctly in the face of a network
connection, but not at peak performance. Notice that vertex arrays
were added to OpenGL in a later version (1.1); they were not put
there initially.
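As a back-of-the-envelope sketch (the function names and message sizes
are invented, not from any spec), here is why CallList is the
network-friendly path: the geometry crosses the link once when the
list is compiled, after which each replay is a tiny command, whereas
vertex-array data lives in client memory and must be re-sent for every
draw.

```c
#include <assert.h>
#include <stddef.h>

/* Toy traffic model: bytes crossing a network-like link when drawing
 * the same geometry `frames` times. CMD_BYTES is an assumed size for
 * a small command such as "glCallList(id)". */

enum { CMD_BYTES = 8 };

/* Display list: geometry uploaded once at compile time, then one
 * small replay command per frame. */
static size_t list_traffic(size_t vertex_bytes, size_t frames) {
    return vertex_bytes + frames * CMD_BYTES;
}

/* Client-side vertex arrays: the vertex data is re-sent every frame
 * along with the draw command. */
static size_t array_traffic(size_t vertex_bytes, size_t frames) {
    return frames * (vertex_bytes + CMD_BYTES);
}
```

For 10 KB of vertices drawn over 100 frames, the display list moves
roughly one hundredth of the data the vertex arrays do, which is the
bias toward local high-bandwidth buses made visible.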
> With the convergence of 2D and 3D, however, my belief is that X11 and
> OpenGL both need to be replaced by a single, unified, centralized,
> network-transparent GUI system that can emulate both.
I've always been a fan of Fresco, but not many seem to share my opinion.
> > [snip]
> > > X11 is a bottleneck, since it's single-threaded and every graphical
> > > client on the box relies on it. If you don't give it means to be more
> > > efficient and flexible, you'll CRIPPLE it and thereby cripple all X11
> > > clients.
Correction: XFree86/Xorg are single-threaded. Nothing in X11 demands
that the server be single-threaded. There are even some graphics cards
designed to function well in the face of multiple rendering threads.
(Recent 3DLabs cards can maintain multiple rendering states and do
context switches between them.)
> > In general, I agree. However, does the X server really require root
> > privileges to be efficient enough? Why shouldn't a normal client be able to
> > be as efficient? Remember my Xnest example, and the possibility of a
> > client-side accelerated Cairo.
>
> At least under Solaris, doing an ioctl context switch to initiate DMA
> is so horribly slow that it was absolutely necessary for the X server
> to be able to initiate DMA transfers directly--it had to be able to
> talk to the hardware directly.
>
> But being able to talk to the hardware directly like that is a
> security problem for OpenGL because they're unprivileged user
> processes.
Well, it is a security problem for one particular method of
implementing OpenGL. If the implementation executes in the context of
the GL-using application and talks to the hardware instead of to a
separate process, then yes. But just because everyone implements it
this way does not mean they had to or that the problem is due to
OpenGL. ;)
> > Perhaps instead of a root/non-root privilege discrimination, there could be
> > a session-leader/session-client hierarchy. The session-leader will *not*
> > get full access to the hardware, because the session-leader can be a
> > non-root process. However, the session-leader *will* be able to control its
> > clients, e.g. by revoking graphics access from a broken/runaway client, and
> > it *will* have a higher priority when it comes to resource allocation.
> > An Xnest server will be the client of the "real" X server, and it will be
> > session-leader for "its" clients. In fact, this hierarchy could also fix
> > running multiple fullscreen X servers on the same hardware by default,
> > because the parallel X servers would no longer be special - they would just
> > be clients to a controlling session-leader (this master leader could at the
> > same time be the process that controls the memory management).
>
> Well, if the connection between client and server is like X11, where
> commands can be transferred in bulk, limiting the context switch
> overhead impact, then yes, what you say makes sense, and that goes
> back to my idea of unifying X11 and OpenGL in a network transparent
> way.
Or even just eliminating X11 since OpenGL already "has" the necessary
features. Then implement X11 on top of OpenGL. You know, Cairo. =)
Network transparency, check.
Pipelining, check.
Queueing and batch processing commands, check. (This is why glFlush
exists, after all.)
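A toy model of that last point (invented types, not a real GL
implementation): commands accumulate in a client-side buffer and the
link is touched once per flush, not once per call, so the round-trip
count tracks flushes rather than commands.

```c
#include <assert.h>

/* Toy command pipe modeling glFlush-style batching. */
typedef struct {
    int pending;  /* commands buffered client-side, not yet sent */
    int writes;   /* how many times we touched the "network"     */
    int sent;     /* commands delivered to the server            */
} Pipe;

/* cf. any GL command: just queue it locally */
static void emit(Pipe *p) { p->pending++; }

/* cf. glFlush: ship the whole batch in one write */
static void flush_pipe(Pipe *p) {
    if (p->pending) {
        p->writes++;           /* one write, however many commands */
        p->sent += p->pending;
        p->pending = 0;
    }
}
```

One hundred fifty commands, two flushes, two trips across the link.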
Kent
P.S. - It occurs to me that it might be unfair of me to say OpenGL has
network transparency. Instead, I should be saying that OpenGL has been
designed to function on a wide variety of architectures (in the
classical sense) with great latitude of implementation. This includes
ordinary PCs, supercomputers with highly parallel rendering backends,
graphics systems connected to host machines by high-bandwidth
backplanes, multiple-renderer multi-head machines, and stripped-down
cell phones.
P.P.S. - It has always bugged me that popular GUIs cannot do
hardware-accelerated 3D in the face of multi-head. It is technically
feasible; there are existing third-party libraries that multiplex
rendering commands to multiple GL backends. Why have they not been
merged into Xorg or Windows?
--
The world before the fall
Delightful is the light of dawn
Noble is the heart of man...
-Cyan Garamonde, Final Fantasy VI
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)