On Thu, Dec 05, 2002 at 02:13:26PM -0800, magenta wrote:
> On Thu, Dec 05, 2002 at 01:23:42PM -0800, Ian Romanick wrote:
> > > 
> > > Yes, I did reread it, which is why I then suggested glXChooseVisual as the
> > > point of change (since it's in visual selection that it's enabled), which
> > > is exactly the reason why it SHOULDN'T be in the driver - a wrapper library
> > > could enable GL_ARB_multisample for the ATI and nVidia vendor drivers, even
> > > though it couldn't do it for DRI at present.  And if it doesn't work, then
> > > the user turns that tweak off.
> > 
> > Well that sucks.  I guess I'd never be able to enable super-sampled FSAA
> > with your wrapper on my R100.  Even though I CAN do it with a driver-based
> > tweak utility on some other operating system.
> 
> But it's not even supported in the DRI driver on the R100...  It's not like
> the wrapper can magically make functionality appear which isn't there to
> begin with, but in order to do the tweak in the driver itself, the driver
> would have to support it anyway!  Unless I'm totally missing something
> about how FSAA is done in OpenGL, in which case I'd love for someone to
> explain.  All of the documents I've found on the web indicate that
> GL_ARB_multisample is the way you do it, even in Windows.

It is one way.  It's the way that the OpenGL ARB has sanctified with an
extension.  It's not the only way.  On Windows with a Radeon, for example,
if you click the 'FSAA 2x' box, it will tell the driver to render to a
buffer twice as wide as requested and scale down when it does a blit for the
swap-buffer call.  'FSAA 4x' does 2x width & 2x height.  This is called
super-sampling.
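
To make that concrete, here's a minimal sketch (not what any particular
driver actually does) of the kind of box-filter downscale a driver could
apply at swap time for the 4x case (2x width, 2x height); the buffer layout,
function name, and RGBA packing are assumptions for illustration, and real
hardware would do this in the blitter rather than on the CPU:

        /* Hypothetical 2x2 box-filter downscale: average each 2x2 block of
         * the super-sampled buffer into one pixel of the displayed buffer.
         * 'src' is (w*2) x (h*2) pixels, 'dst' is w x h, both tightly
         * packed 32-bit RGBA. */
        static void downsample_2x2( const unsigned char *src,
                                    unsigned char *dst, int w, int h )
        {
           const int src_pitch = w * 2 * 4;   /* bytes per source scanline */
           int x, y, c;

           for ( y = 0 ; y < h ; y++ ) {
              for ( x = 0 ; x < w ; x++ ) {
                 const unsigned char *p =
                    src + (y * 2) * src_pitch + (x * 2) * 4;

                 for ( c = 0 ; c < 4 ; c++ ) {
                    unsigned sum = p[c] + p[4 + c]
                       + p[src_pitch + c] + p[src_pitch + 4 + c];
                    dst[(y * w + x) * 4 + c] = (unsigned char)(sum / 4);
                 }
              }
           }
        }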

Any card that has a blitter that can scale and filter can do this.  Even the
RagePro or G200 hardware.

Multisampling is a different kettle of fish.  It's more efficient and
produces better results, but it requires specific hardware support.
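
For reference, the extension route looks roughly like this (assuming the
GLX_ARB_multisample / GL_ARB_multisample tokens are available in the headers
and the extension is advertised by the implementation); the attribute list is
how an app asks glXChooseVisual for a multisample-capable visual:

        #include <GL/gl.h>
        #include <GL/glx.h>

        /* Sketch: request a visual with a multisample buffer and 4 samples
         * per pixel.  Returns NULL if no such visual exists. */
        static XVisualInfo *choose_msaa_visual( Display *dpy, int screen )
        {
           int attribs[] = {
              GLX_RGBA,
              GLX_DOUBLEBUFFER,
              GLX_DEPTH_SIZE, 16,
              GLX_SAMPLE_BUFFERS_ARB, 1,   /* want a multisample buffer */
              GLX_SAMPLES_ARB, 4,          /* 4 samples per pixel */
              None
           };

           return glXChooseVisual( dpy, screen, attribs );
        }

        /* and, once a context is current: glEnable( GL_MULTISAMPLE_ARB ); */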

> > > No, because there's a very large difference between "disabling TCL for
> > > debugging purposes" and "enabling 32bpp textures for quality purposes."
> > > Why would a user want to disable TCL for anything other than debugging
> > > the driver?
> > 
> > Disabling TCL was the only example I could come up with in the existing
> > drivers.  There are other valid examples in my thread with Allen on
> > selecting different driver fast-paths.
> 
> Okay, the only example I can find right now (sourceforge's mailing list
> archive doesn't have the best threading interface I've seen...) is about
> the vertex formats used in the internal representation.  Obviously the
> wrapper library couldn't configure that, but I don't understand why this
> needs to be configured in the driver to begin with - if the driver supports
> all the different vertex array formats internally, why doesn't it just
> select which "conversion" (or fast path, or whatever) to perform based on
> the data which is presented to it by the application to begin with?
> 
> Like, I thought that was the point to OpenGL's design in general - that the
> driver would use the high-level information that's present in order to tune
> its low-level operation, completely transparently to the user and
> application.

It does *work*.  However, it can be very difficult for the GL to make the
right choices.  Look at how CPUs work, for example.  There's lots of
prefetch hardware in there to make the memory system faster, but adding
prefetch instructions in the right places can make a world of difference.
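
As a rough illustration of that analogy (GCC's __builtin_prefetch; nothing
GL-specific, and the access pattern is made up), explicit hints in the right
place can beat whatever the hardware guesses:

        /* Toy example: while summing block i, hint the cache to start
         * pulling in block i+1.  Whether this helps depends entirely on the
         * access pattern, which is exactly the kind of knowledge the caller
         * has and the hardware/library has to guess at. */
        static float sum_blocks( const float *data, int nblocks, int block )
        {
           float total = 0.0f;
           int i, j;

           for ( i = 0 ; i < nblocks ; i++ ) {
              if ( i + 1 < nblocks )
                 __builtin_prefetch( &data[(i + 1) * block] );

              for ( j = 0 ; j < block ; j++ )
                 total += data[i * block + j];
           }

           return total;
        }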

Any time a library makes just-in-time optimization / fast-path choices, there
is some chance that it will be wrong.  If it happens to be wrong for some
huge, critical app, that can be the difference in which system somebody
chooses.

Right now there is no example of this in DRI.  That doesn't mean that there
won't ever be.  The coming future of vertex & fragment programs only
INCREASES the likelihood.
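
A contrived sketch of the sort of choice involved (every name here is
hypothetical; no DRI driver is actually structured this way): the driver
guesses a vertex path from the arrays the app hands it, and a driver-level
knob lets the user override a bad guess for one particular app:

        enum vertex_path { PATH_GENERIC, PATH_TINY, PATH_FAT };

        struct array_state {
           int has_normals;
           int has_two_texcoords;
        };

        /* Hypothetical fast-path selection: the guess is based on the
         * formats the app actually uses, and the override exists because
         * the guess can be wrong for some huge, critical app. */
        static enum vertex_path pick_vertex_path( const struct array_state *st,
                                                  int user_override /* -1 = none */ )
        {
           if ( user_override >= 0 )
              return (enum vertex_path) user_override;

           if ( !st->has_normals && !st->has_two_texcoords )
              return PATH_TINY;      /* small vertices, lighting off */

           if ( st->has_two_texcoords )
              return PATH_FAT;       /* multitexture-heavy apps */

           return PATH_GENERIC;
        }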

> > > What's wrong with just calling the appropriate function on all
> > > glTexImage2D() calls?
> > 
> > Because that behavior is wrong.  It would override settings that the app
> > would make.  It's fine to change the default anisotropy from 1.0 to 8.0, but
> > if the app specifies 4.0, that setting had better be respected.
> 
> Good point.
> 
> Still, I think that for pretty much all applications, it'd be a pretty
> simple matter to do something like this:
> 
> int newBind;
> 
> void TweakTexture(void);
> 
> void glBindTexture(GLenum target, GLuint handle)
> {
>       newBind = 1;
>       real_glBindTexture(target, handle);
> }
> 
> /* same idea for the other glTexParameter{if}[v] variants */
> void glTexParameterf(GLenum target, GLenum param, GLfloat val)
> {
>       if (newBind)
>       {
>               newBind = 0;
>               TweakTexture();
>       }
>       real_glTexParameterf(target, param, val);
> }
> 
> void TweakTexture(void)
> {
>       /* set up anisotropic, trilinear, etc. */
> }
> 
> This way, the TweakTexture() function would only be called the first time a
> parameter is modified after a texture object is bound, and before the
> application does its own parameter modification.  Chances are the
> application will only do glTexParameter when it's first setting up the
> texture, and if it does do glTexParameter later on (in order to change its
> own mipmap settings, etc.) hopefully it'll do whatever anisotropic
> configuration it would do on its own then. :)

But that's wrong.  If I do the following sequence in my code (assuming the
GL supports 16.0 aniso), I expect to see 16.0 printed out.

        GLfloat foo;

        glBindTexture( GL_TEXTURE_2D, 1 );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0 );
        glBindTexture( GL_TEXTURE_2D, 2 );
        glBindTexture( GL_TEXTURE_2D, 1 );
        glGetTexParameterfv( GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, & foo );
        printf( "aniso = %f\n", foo );

-- 
Smile!  http://antwrp.gsfc.nasa.gov/apod/ap990315.html

