| I was dismayed to learn that most cards do support render-to-texture for
| environment mapping etc. but it was not supported on OpenGl. ...

Well, you've been misled slightly.  Many cards do support some form of
render-to-texture under D3D, but I don't think that's true of most
cards.  Also, on some cards D3D's render-to-texture is actually
implemented with a copy (just like glCopyTexImage), not as a direct
rendering operation whose destination is texture memory.

|      ... Now I know mesa has gone outside the bounds with driver
| support, e.g. 3dfx. And I was wondering if the same can be done with
| needed features that don't seem to be getting any attention from IHV's...

Sure.  I don't want to discourage you from looking at
render-to-texture extensions, but I think it's a good idea to discuss
some of the design parameters you'll need to take into account.

First, one might well ask why texture memory is treated differently
from framebuffer memory in the first place.  Why not have just one
chunk of memory that you can use for both texture and rendering? 
Loosely, the answer is that you can often get better performance by
using physically separate memories or organizing the data differently
for different uses.  Here are some examples:

        Bilinear filtering requires that you access four texels for
        each pixel you draw; therefore the bandwidth requirements for
        reading texture memory can be as much as four times higher
        than those for writing pixels in the framebuffer.

        Framebuffer memory tends to be accessed sequentially when it's
        scanned out to the DACs for display, so there's a need to
        optimize for that case (and make sure that the video timing
        can always be met without glitches).  Texture memory tends to
        be accessed in little clusters of four adjacent texels (in
        each mipmap level).

        In addition to pixels, the framebuffer often contains a Z
        buffer, a stencil buffer, etc.  These things have to be arranged
        in memory so that they can be accessed quickly during
        rendering.  Texture memory wouldn't normally support these
        things unless you make the design decision that rendering to
        texture is critically important.
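The filtering cost in the first example above can be made concrete with a
small sketch.  This is plain C, not GL; the 2x2 texture, the mapping from
(u,v) to texel coordinates, and the function name are all mine, and real
hardware does this per fragment with dedicated filtering units:

```c
#include <assert.h>

/* Bilinear filtering of a tiny 2x2 single-channel texture.
   Note the four texel reads needed to produce ONE output value --
   that's the source of the ~4x read-bandwidth requirement. */
static float bilinear(const float tex[2][2], float u, float v)
{
    /* u, v in [0,1]; map to texel space for a 2x2 texture */
    float x = u * 1.0f, y = v * 1.0f;
    int x0 = (int)x, y0 = (int)y;
    int x1 = x0 + 1 > 1 ? 1 : x0 + 1;   /* clamp to texture edge */
    int y1 = y0 + 1 > 1 ? 1 : y0 + 1;
    float fx = x - x0, fy = y - y0;      /* fractional position */

    /* four texel fetches per output pixel */
    float t00 = tex[y0][x0], t10 = tex[y0][x1];
    float t01 = tex[y1][x0], t11 = tex[y1][x1];

    /* blend horizontally, then vertically */
    float top    = t00 + fx * (t10 - t00);
    float bottom = t01 + fx * (t11 - t01);
    return top + fy * (bottom - top);
}
```

A pixel sampled dead-center between the four texels comes out as their
average, which is why the memory system has to deliver all four texels
to produce it.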

Historically, hardware designers have done things like organizing
texture memory in multi-way interleaved fashion, rather than linearly,
to get better performance.  It has also been common to use physically
separate texture memory that's driven by different pins on the
rendering chips, so that it can be accessed in parallel with the
accesses to framebuffer memory.  All of these things make it harder to
render to texture memory, and to come up with an API design that
handles all the cases gracefully.
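To make "interleaved rather than linear" a bit more concrete, here's one
common scheme of that general sort: Morton (Z-order) addressing, which
puts small 2D neighborhoods of texels at nearby addresses.  The function
name is mine and no particular piece of hardware is implied:

```c
#include <stdint.h>

/* Morton (Z-order) address for a texel: interleave the bits of x
   and y.  A linear layout would be y*width + x instead, which
   scatters a 2x2 texel cluster across two distant rows. */
static uint32_t morton2d(uint32_t x, uint32_t y)
{
    uint32_t addr = 0;
    for (int bit = 0; bit < 16; bit++) {
        addr |= ((x >> bit) & 1u) << (2 * bit);       /* x bits -> even positions */
        addr |= ((y >> bit) & 1u) << (2 * bit + 1);   /* y bits -> odd positions */
    }
    return addr;
}
```

With this layout the 2x2 block at (2,2) occupies addresses 12 through 15,
i.e. one contiguous run, which is exactly the cluster a bilinear fetch
wants.  The flip side is that rasterizing scanlines into such a layout
(rendering to it) no longer produces simple sequential writes.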

Now it's possible to design your graphics system so that texture
memory and framebuffer memory are more unified.  Typically you would
use caches, deep pipelines with prefetch, and other techniques to
paper over the differences in access patterns.  On such systems it's
a lot easier to render to texture memory.

So let's assume that you've got hardware that can support rendering to
texture memory.  What changes would you need to make to the OpenGL API
to support it?  Here are some things that come to mind:

        You would need to expose a new concept for a drawing
        surface.  Since OpenGL currently has notions of pixel format
        descriptors or X11 Visuals that are associated with windows,
        you'd need to extend that somehow to textures.  New pixel
        formats probably would be needed.  Changes to glXMakeCurrent
        (or its equivalent on other systems) would be needed.

        You'd need to figure out how to manage the Z buffer and other
        ancillary buffers that might be associated with a texture that
        you're using as a target for rendering.  Would you re-use the
        Z buffer associated with the window, or allocate a new one?

        You'd need to figure out which odd corner-cases could arise
        and how you want to handle them.  What if you use a texture
        while you're rendering to it?  What happens if another thread
        deletes a texture while you're rendering to it?  What if you
        need to load a new texture into texture memory, but can't do
        so because the one you're using as a rendering target has
        caused texture memory to be filled or fragmented?  (And what
        implications does that have for proxy textures, which are
        supposed to give you an ironclad guarantee about whether a
        texture can be loaded or not?)
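To make the first of those changes more concrete, here's one hypothetical
shape such an extension might take.  None of these entry points or types
exist; the names are invented purely to show where the API would have to
grow:

```c
/* HYPOTHETICAL -- these declarations sketch an API shape only. */

/* A drawing surface backed by a texture rather than a window,
   carrying its own pixel-format/Visual-style description. */
typedef struct GLXTextureSurfaceRec *GLXTextureSurface;

/* Create a surface whose color buffer *is* the given texture
   level's memory; this is where new pixel formats would surface. */
GLXTextureSurface glxCreateTextureSurfaceEXT(Display *dpy,
                                             XVisualInfo *vis,
                                             GLuint texture,
                                             int mipmap_level);

/* MakeCurrent (or its equivalent) would have to accept the new
   surface type as a rendering destination. */
Bool glxMakeTextureCurrentEXT(Display *dpy,
                              GLXTextureSurface surface,
                              GLXContext ctx);
```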

You could also consider some alternative designs, like a special copy
operation from framebuffer memory to texture memory that would do
nothing on hardware that supported rendering to texture, and behave
something like glCopyTexImage on hardware that didn't support
rendering to texture.
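The appeal of that design is that applications get one code path.  Here's
a conceptual sketch of the dispatch, in plain C with invented names (no
real GL; "framebuffer" and "texture" are just arrays standing in for the
two kinds of memory):

```c
#include <string.h>

/* Conceptual sketch of the fallback design: the "resolve" step is
   a no-op when the hardware rendered directly into texture memory,
   and a real copy (in the spirit of glCopyTexImage) when it
   rendered into ordinary framebuffer memory. */
struct surface {
    int    renders_to_texture;  /* 1 if pixels already live in texture memory */
    float *framebuffer;         /* rendering results, linear layout */
    float *texture;             /* texture memory used for later texturing */
    int    npixels;
};

static void resolve_to_texture(struct surface *s)
{
    if (s->renders_to_texture)
        return;  /* destination was texture memory all along */
    memcpy(s->texture, s->framebuffer,
           s->npixels * sizeof(float));  /* the glCopyTexImage-style path */
}
```

Either way the application calls the same function after rendering; only
the cost differs between the two classes of hardware.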

That's about all I can think of at the moment, though I'm sure I've
missed a few things.

Regards,
Allen


_______________________________________________
Mesa-dev maillist  -  [EMAIL PROTECTED]
http://lists.mesa3d.org/mailman/listinfo/mesa-dev
