Re: [Mesa-dev] New features?

1999-09-02 Thread Stephen J Baker


(Arguments with Allen are *so* informative...and also hard to win!)

On Wed, 1 Sep 1999 [EMAIL PROTECTED] wrote:

 |  You'd need to figure out which odd corner-cases could arise
 |  and how you want to handle them.  What if you use a texture
 |  while you're rendering to it?
 | 
 | You should (at the API level) simply describe that as "undefined
 | behaviour".
 
 Would that be desirable if the user planned to store multiple texture
 images inside a single large texture, perhaps to reduce texture
 binding costs?

So, you would allow rendering to a texture at the same time as
rendering with that texture...but put some words in to allow
this so long as the area of the map you accessed didn't overlap
the area you were rendering with?  That would be very hard to
explain properly - because in some texture modes, you could
inadvertently blend to a texel that was in the forbidden zone,
or the polygon that is using the texture could become so small that
roundoff error would cause texels that are being re-rendered to
be displayed.

Alternatively I suppose you could say that whatever texels were
within the re-rendering viewport/scissor box were undefined during
the rendering.

It's all too nasty.  Just say "it's undefined" and you're covered.

 |  What if you
 |  need to load a new texture into texture memory, but can't do
 |  so because the one you're using as a rendering target has
 |  caused texture memory to be filled or fragmented?  (And what
 |  implications does that have for proxy textures, which are
 |  supposed to give you an ironclad guarantee about whether a
 |  texture can be loaded or not?)
 | 
 | No - they don't guarantee that there will be enough texture memory
 | to make the texture resident in hardware texture memory. The
 | call only guarantees that if texture memory were currently empty,
 | your texture could be accommodated.
 
 In ordinary use, if a proxy query indicates that a texture can be made
 resident, then that *does* guarantee that you can successfully load
 the texture at any time in the future.  In fact, that's one of the
 main reasons proxy queries exist; they're the only reliable way to
 determine if you can actually use a given texture.

What the RedBook says is (quoted verbatim from the second edition):

  "The texture proxy tells you if there is space for your texture, but
  only if all texture resources are available (in other words, if it's
  the only texture in town). If other textures are using resources,
  then the texture proxy query may respond affirmatively, but there may
  not be enough space to make your texture resident (that is, part of a
  possibly high-performance working set of textures)."
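
(For anyone who hasn't used it, the query itself is just the ordinary
texture call aimed at the proxy target - a minimal sketch, where the
1024x1024 RGBA size is only an example:)

  GLint width;

  /* Describe the texture to the proxy target; no data is loaded. */
  glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, NULL);

  /* If the implementation can't accommodate the texture, the proxy
     state is zeroed out. */
  glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                           GL_TEXTURE_WIDTH, &width);
  if (width == 0) {
    /* texture too big (or format unsupported) */
  }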

 This is possible because binding a new texture will throw out any
 other textures already in texture memory, if necessary.

Well, that's an assumption that you choose to make. It's not
guaranteed to be true (think of multitexture, for example - all N
textures have to be resident at the same time, but the proxy
command didn't know that the other textures you are using
in the multitexture setup were all 2048x2048 maps!). All proxy
tells you is that *IF* texture memory were free then your map
would fit.  If it isn't all free (either due to the needs of
multitexture or due to some rendering that's going on in
texture memory) then proxy didn't say whether it would work or
not.  This makes proxy textures less useful than you might hope
but doesn't mean that rendering-to-texture would break the
semantics of proxy textures.
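
(To make the multitexture point concrete, here is a rough sketch
assuming ARB_multitexture, with 'base_map' and 'light_map' standing
in for texture objects created elsewhere:)

  glActiveTextureARB(GL_TEXTURE0_ARB);
  glBindTexture(GL_TEXTURE_2D, base_map);
  glActiveTextureARB(GL_TEXTURE1_ARB);
  glBindTexture(GL_TEXTURE_2D, light_map);
  /* Both maps must be resident at once for multitextured rendering,
     but a GL_PROXY_TEXTURE_2D query checks each size in isolation. */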

  My point was
 that render-to-texture might introduce a new semantic that ``locks'' a
 texture into memory, thus violating the conditions assumed by the
 proxy query.

Yes - it'll certainly create a lock that makes any assumed guarantee
from the proxy texture invalid...I just feel like the proxy texture's
guarantee is rather weak anyway - and this isn't doing any noticeable
harm to that.

 If you choose to make an extension that eliminates the proxy query
 guarantees, then I believe every OpenGL program must be modified to
 check *every* texture load and bind for failure, and fall back to a
 rescaled texture (or one with fewer mipmap levels).  It's worth
 thinking about this carefully...
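
(Concretely, that fallback might look like the sketch below - w, h,
and pixels are assumed to exist already, and rescale_image() is a
made-up helper standing in for gluScaleImage() or similar:)

  /* Keep halving the size until the proxy query succeeds. */
  GLint ok = 0;
  while (!ok && w > 1 && h > 1) {
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_WIDTH, &ok);
    if (!ok) { w /= 2; h /= 2; }
  }
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, rescale_image(pixels, w, h));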
 
Indeed.  Defining an off-screen rendering context and then a fast
copy-framebuffer-to-texture operation (as SGI hardware does) is a
lot safer semantically...but for hardware that can do it, rendering
directly into texture memory (for all the 'gotchas' it turns up)
is still a tempting goal.
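
(The copy path is already expressible in core GL 1.1, more or less
like this - the texture must have been given storage with
glTexImage2D first, and draw_reflection_view() is a stand-in for
whatever you render:)

  /* Render the reflected view into the back buffer, then copy it
     into the bound texture. */
  glBindTexture(GL_TEXTURE_2D, env_map);
  draw_reflection_view();                   /* hypothetical */
  glCopyTexSubImage2D(GL_TEXTURE_2D, 0,     /* target, level       */
                      0, 0,                 /* xoffset, yoffset    */
                      0, 0, 256, 256);      /* x, y, width, height */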

Steve Baker            (817)619-2657 (Vox/Vox-Mail)
Raytheon Systems Inc.  (817)619-2466 (Fax)
Work: [EMAIL PROTECTED]  http://www.hti.com
Home: [EMAIL PROTECTED] http://web2.airmail.net/sjbaker1







Re: [Mesa-dev] New features?

1999-09-01 Thread akin


Steve Baker wrote:
|  
| ...and MIPmapping with GL_LINEAR_MIPMAP_LINEAR can require 8 texels
| to be accessed per pixel. ...

And some forms of anisotropic filtering require even more.  I just
thought I'd keep the discussion simple by offering only one example. :-)

|   ... However, the RAM bandwidth isn't as bad
| as that because the rendering chip will probably have a texel cache
| that will make the overhead look more like 2x than 4x or 8x

This, too, is complicated -- depending on whether you're mipmapping,
how LOD is computed, whether the hardware designer intends to
guarantee a fill rate, etc.  But the bottom line is still the same:
you typically need to fetch more data from texture memory than from
the framebuffer, so people have traditionally designed separate
texture memories, optimized for access patterns different from
the framebuffer's.
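
To put rough (back-of-envelope, not measured) numbers on it: at
640x480 and 30Hz with GL_LINEAR_MIPMAP_LINEAR and 16-bit texels, the
worst case is 640 * 480 * 30 * 8 texels * 2 bytes, or about 147 MB/s
of texture reads, versus roughly 640 * 480 * 30 * 6 bytes (a color
write plus a Z read and write), or about 55 MB/s, on the framebuffer
side - before any texel caching helps.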

My goal here was just to make sure everyone understood the reasons why
things are designed in a particular way.

|  In addition to pixels, the framebuffer often contains a Z
|  buffer, a stencil buffer, etc.  These things have to be arranged
|  in memory so that they can be accessed quickly during
|  rendering.  Texture memory wouldn't normally support these
|  things unless you make the design decision that rendering to
|  texture is critically important.
|  
| That's the part that makes rendering-to-texture hard (conceptually) to
| build into a clean API.

The difference between color formats supported by textures and by
windows worries me at least as much.  Consider rendering to an S3TC
compressed texture.  :-)

| All these things explain why some devices can't render to texture
| at all. However, others *can* do so - and an extension that exposed
| that functionality would be useful.  Even on devices that can't
| do that, there may be a fast path from frame buffer to texture
| memory that can be exploited.

Yes.  Explaining the background information helps everyone understand
why the feature isn't universal, though, and why it may have radically
different performance characteristics even across machines that
support it.

| Rendering to unused areas of the frame buffer is another valuable
| trick that could be exposed on some chips.

Absolutely.  

| ... The chip must
| be *able* to render there - it's just a matter of getting past
| the API to let you do so.

That's what PBuffers and FBConfigs are all about.  Possibly they could
be used as a model for a render-to-texture extension.
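
(A minimal sketch of that model using the GLX 1.3 entry points - dpy,
screen, and ctx are assumed to exist already, <GL/glx.h> is assumed
included, and error checking is omitted:)

  int fb_attrs[] = { GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
                     GLX_RENDER_TYPE,   GLX_RGBA_BIT,
                     GLX_DEPTH_SIZE,    16,   /* Z for the env-map case */
                     None };
  int pb_attrs[] = { GLX_PBUFFER_WIDTH,  256,
                     GLX_PBUFFER_HEIGHT, 256,
                     None };
  int n;
  GLXFBConfig *cfgs = glXChooseFBConfig(dpy, screen, fb_attrs, &n);
  GLXPbuffer  pbuf  = glXCreatePbuffer(dpy, cfgs[0], pb_attrs);

  glXMakeCurrent(dpy, pbuf, ctx);   /* render here, then copy out */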

|  You'd need to figure out how to manage the Z buffer and other
|  ancillary buffers that might be associated with a texture that
|  you're using as a target for rendering.  Would you re-use the
|  Z buffer associated with the window, or allocate a new one?
|  
| You certainly couldn't re-use the main Z buffer because you don't
| know if the application might want to continue using it.

I wouldn't rule it out instantly -- after all, rendering to auxiliary
buffers involves sharing the main Z buffer, and it's still useful in
many cases.

| You might decide that there is no Z buffer possible for those kinds
| of context.

Yes, though if the hardware is capable of rendering to texture while
using Z buffering, it would be nice to support it.  I imagine that would
be a common case, for example, when rendering an environment map.

|  You'd need to figure out which odd corner-cases could arise
|  and how you want to handle them.  What if you use a texture
|  while you're rendering to it?
| 
| You should (at the API level) simply describe that as "undefined
| behaviour".

Would that be desirable if the user planned to store multiple texture
images inside a single large texture, perhaps to reduce texture
binding costs?

|  What happens if another thread
|  deletes a texture while you're rendering to it?
| 
| That's illegal anyway.  What happens if another thread deletes
| a texture while you are rendering *with* it?

It's legal.  Textures are refcounted, and the texture object will
survive until the last thread using it unbinds it.  The same solution
would work for rendering to the texture; you'd just need some
well-defined synchronization point at which the OpenGL implementation
can figure out that the texture is no longer in use.  Perhaps at the
next glXMakeCurrent, for example.
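
(Purely to illustrate the semantics - this is not actual Mesa code -
the bookkeeping inside the implementation amounts to something like:)

  /* Hypothetical internal refcounting, for illustration only. */
  struct texture_object {
    GLuint name;
    int    refcount;   /* contexts/threads with this texture bound */
    int    deleted;    /* glDeleteTextures() has been called       */
    /* ... image data ... */
  };

  static void unbind(struct texture_object *t)
  {
    if (--t->refcount == 0 && t->deleted)
      free_texture_storage(t);   /* hypothetical helper */
  }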

As with the other comments, I just wanted everyone to be aware of the
sort of problem that would need to be solved.

|  What if you
|  need to load a new texture into texture memory, but can't do
|  so because the one you're using as a rendering target has
|  caused texture memory to be filled or fragmented?  (And what
|  implications does that have for proxy textures, which are
|  supposed to give you an ironclad guarantee about whether a
|  texture can be loaded or not?)
|