On Thu, Dec 05, 2002 at 03:56:09PM -0800, Ian Romanick wrote:
> > 
> > But it's not even supported in the DRI driver on the R100...  It's not like
> > the wrapper can magically make functionality which isn't there to begin
> > with appear, but in order to do the tweak in the driver itself, the driver
> > would have to support it anyway!  Unless I'm totally missing something
> > about how FSAA is done in OpenGL, in which case I'd love for someone to
> > explain.  All of the documents I've found on the web indicate that
> > GL_ARB_multisample is the way you do it, even in Windows.
> 
> It is one way.  It's the way that the OpenGL ARB has sanctified with an
> extension.  It's not the only way.  On Windows with a Radeon, for example,
> if you click the 'FSAA 2x' box, it will tell the driver to render to a
> buffer twice as wide as requested and scale down when it does a blit for the
> swap-buffer call.  'FSAA 4x' does 2x width & 2x height.  This is called
> super-sampling.
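(Just to make sure I'm picturing that right: the down-scale at swap time
would conceptually be something like the following, except the real thing
is a filtered blit in the blitter rather than a CPU loop.  The function
and names here are purely mine for illustration.)

    /* Hypothetical down-scale step for 4x super-sampling: average each
     * 2x2 block of the oversized RGBA8 back buffer into one pixel of
     * the real buffer.  Real hardware does this with a filtered blit. */
    #include <stdint.h>

    static void
    downsample_2x2(const uint8_t *src, int src_w, int src_h, uint8_t *dst)
    {
        int dst_w = src_w / 2, dst_h = src_h / 2;

        for (int y = 0; y < dst_h; y++) {
            for (int x = 0; x < dst_w; x++) {
                for (int c = 0; c < 4; c++) {
                    unsigned sum =
                        src[((2*y    ) * src_w + (2*x    )) * 4 + c] +
                        src[((2*y    ) * src_w + (2*x + 1)) * 4 + c] +
                        src[((2*y + 1) * src_w + (2*x    )) * 4 + c] +
                        src[((2*y + 1) * src_w + (2*x + 1)) * 4 + c];
                    dst[(y * dst_w + x) * 4 + c] = (uint8_t)(sum / 4);
                }
            }
        }
    }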

My understanding was that the ARB_multisample extension could be
implemented using supersampling (even if it's not actually done using
multisampling), and that enabling ARB_multisample was functionally
equivalent to clicking the FSAA checkbox in the driver.  If that's not the
case, then that's been the source of my confusion all along.
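For reference, what I've been assuming an application does to ask for it
through GLX_ARB_multisample is roughly this (just a sketch from my reading
of the specs; it assumes the ARB tokens are defined in the headers and
skips all error checking):

    #include <GL/glx.h>
    #include <GL/gl.h>

    /* Ask GLX for a visual with a multisample buffer. */
    XVisualInfo *choose_fsaa_visual(Display *dpy, int samples)
    {
        int attribs[] = {
            GLX_RGBA,
            GLX_DOUBLEBUFFER,
            GLX_DEPTH_SIZE,         16,
            GLX_SAMPLE_BUFFERS_ARB, 1,        /* want a multisample buffer */
            GLX_SAMPLES_ARB,        samples,  /* e.g. 2 or 4 */
            None
        };

        return glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    }

    /* ...then, once a context on that visual is current:
     *     glEnable(GL_MULTISAMPLE_ARB);
     * (on by default per the spec, but it doesn't hurt to be explicit) */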

> Any card that has a blitter that can scale and filter can do this.  Even the
> RagePro or G200 hardware.
> 
> Multisampling is a different kettle of fish.  It's more efficient, produces
> better results, but requires specific hardware support.

Yes, and I do understand the difference.

> > Like, I thought that was the point to OpenGL's design in general - that the
> > driver would use the high-level information that's present in order to tune
> > its low-level operation, completely transparently to the user and
> > application.
> 
> It does *work*.  However, it can be very difficult for the GL to make the
> right choices.  Look at how CPUs work, for example.  There's lots of
> prefetch hardware in there to make the memory system faster, but adding
> prefetch instructions in the right places can make a world of difference.
> 
> Any time a library makes just-in-time optimization / fast-path choices, there
> is some chance that it will be wrong.  If it happens to be wrong for some huge,
> critical app, that can be the difference in which system somebody chooses.

But how could it be wrong in such a way that some other choice could be
right?  I mean, if the application sends the vertex array to it in a
certain format, either the card can support that format or it can't and the
driver has to convert it, right?  So is it just a matter of which
conversion is least-sucky?

> Right now there is no example of this in DRI.  That doesn't mean that there
> won't ever be.  The coming future of vertex & fragment programs only
> INCREASES the likelihood.

Wouldn't vertex/fragment programs already be using the card's native
format(s) though?  Once the client state is all configured and that
glDrawElements() call happens, wouldn't the driver have to either decide
that the format is something the hardware supports, or convert it into
something which it does?
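To put it concretely, by the time the app has done something like the
following, the driver has all the format information it is ever going to
get (trivial fixed-function example, nothing driver-specific about it):

    /* Minimal vertex-array setup: at glDrawElements() time the driver
     * either feeds this layout to the card directly or converts it. */
    GLfloat verts[]  = { 0.0f, 0.0f, 0.0f,
                         1.0f, 0.0f, 0.0f,
                         0.0f, 1.0f, 0.0f };
    GLubyte colors[] = { 255, 0,   0,   255,
                         0,   255, 0,   255,
                         0,   0,   255, 255 };
    GLushort idx[]   = { 0, 1, 2 };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors);

    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, idx);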

> > This way, the TweakTexture() function would only be called the first time a
> > parameter is modified after a texture object is bound, and before the
> > application does its own parameter modification.  Chances are the
> > application will only do glTexParameter when it's first setting up the
> > texture, and if it does do glTexParameter later on (in order to change its
> > own mipmap settings, etc.) hopefully it'll do whatever anisotropic
> > configuration it would do on its own then. :)
> 
> But that's wrong.  If I do the following sequence in my code (assuming the
> GL support 16.0 aniso), I expect to see 16.0 printed out.
> 
>       glBindTexture( GL_TEXTURE_2D, 1 );
>       glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0 );
>       glBindTexture( GL_TEXTURE_2D, 2 );
>       glBindTexture( GL_TEXTURE_2D, 1 );
>       glGetTexParameterfv( GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, & foo );
>       printf( "aniso = %f\n", foo );

Okay, so let's step through that... The wrapped glBindTexture() sets the
flag and then calls the real glBindTexture().  Then the wrapped
glTexParameterf() sets anisotropy to 8 (the tweak value), clears the flag,
and then sets anisotropy to 16 as the application requested.  Then the
texture is rebound, and the glGetTexParameterfv() gets the anisotropy,
which is still set to 16.  So the code as you've written it has no effect
on the correctness. :)
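In case it helps, here's roughly what I have in mind for the wrapper -- a
sketch only, with made-up names for the real entry points (the library
would dlsym() those) and 8.0 standing in for the configured tweak value:

    static GLboolean tweak_pending = GL_FALSE;

    void glBindTexture(GLenum target, GLuint texture)
    {
        tweak_pending = GL_TRUE;   /* defer the tweak until a parameter
                                      is touched on this texture */
        real_glBindTexture(target, texture);
    }

    void glTexParameterf(GLenum target, GLenum pname, GLfloat param)
    {
        if (tweak_pending) {
            real_glTexParameterf(target, GL_TEXTURE_MAX_ANISOTROPY_EXT, 8.0f);
            tweak_pending = GL_FALSE;
        }
        real_glTexParameterf(target, pname, param);  /* app's value last */
    }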

However, yeah, I do see an easy way for it to become incorrect: make some
other glTexParameter call between the last glBindTexture and the
glGetTexParameter, and the tweak fires again and clobbers the 16.0 the
application set.

However, my suspicion is that most applications don't do things that way,
and if there is an application which does it that way, the anisotropy tweak
should probably be disabled anyway, since the point of the tweak is
enabling anisotropy in applications which don't support it to begin with.

So yes, there are situations where it could lead to incorrect behavior, but
you have to go pretty far out on a limb to find one, and still have to use
a tweak which isn't supposed to be used in that situation anyway.

I suppose that an easy fix would be for the tweak library to disable the
anisotropy tweak if it sees that the application is explicitly setting
anisotropy to begin with.  Like, if an application uses a feature itself,
then just trust the application to do the right thing.  After all, the tweak
library is intended to add new behavior, not override existing behavior.
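Extending the sketch above, the wrapper's glTexParameterf would just grow
something like this (same made-up names; aniso_tweak_enabled is a new
global that defaults to GL_TRUE):

    static GLboolean aniso_tweak_enabled = GL_TRUE;

    void glTexParameterf(GLenum target, GLenum pname, GLfloat param)
    {
        if (pname == GL_TEXTURE_MAX_ANISOTROPY_EXT)
            aniso_tweak_enabled = GL_FALSE;   /* the app handles aniso
                                                 itself; trust it */
        else if (tweak_pending && aniso_tweak_enabled) {
            real_glTexParameterf(target, GL_TEXTURE_MAX_ANISOTROPY_EXT, 8.0f);
            tweak_pending = GL_FALSE;
        }
        real_glTexParameterf(target, pname, param);
    }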

-- 
http://trikuare.cx

