On Thu, Oct 17, 2002 at 10:09:28AM -0700, Allen Akin wrote:
> On Thu, Oct 17, 2002 at 09:22:37AM -0700, Ian Romanick wrote:
> | ...
> | So, I asked a couple people around IBM what the accepted practice was.  I
> | was told that an implementation is not required to export extension strings
> | for extensions that are required for its advertised OpenGL version.  I was
> | then told about Nvidia's technique.
> 
> It's true that you don't have to advertise extensions that match the
> functionality in your version of the core.  However, if you supported an
> extension in the past, you're likely to break a bunch of apps if you
> *stop* supporting it in a new release.

Fair enough.  Since EXT_texture3D would be the only case of that for DRI /
Mesa, I wouldn't propose that either.

> Also, I wouldn't want to encourage app developers to use the absence of
> an extension string to determine whether a core function is hardware
> accelerated.  There are plenty of corner cases now (Brian mentioned
> one), and as programmability becomes more widely available, there are
> about to be a *lot* more.

I'm not suggesting that the semantic be "if it's in the extension string
then it is absolutely accelerated."  There are plenty of other things in
core OpenGL that don't meet that.  I am suggesting that the semantic be "if
it's NOT in the extension string then it is absolutely NOT accelerated."
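Under that semantic an app would simply parse the extension string before relying on a feature. A minimal sketch of a correct check (the helper name `has_extension` is mine, not anything from DRI/Mesa; note that a naive strstr() would wrongly match GL_EXT_texture as a prefix of GL_EXT_texture3D, so token boundaries must be checked):

```c
#include <string.h>

/* Return 1 if 'ext' appears as a whole token in the space-separated
 * extension list, 0 otherwise. */
static int
has_extension(const char *ext_list, const char *ext)
{
    const char *p = ext_list;
    size_t len = strlen(ext);

    while ((p = strstr(p, ext)) != NULL) {
        /* Token must start at the list head or just after a space... */
        int starts_ok = (p == ext_list) || (p[-1] == ' ');
        /* ...and end at a space or the terminating NUL. */
        int ends_ok = (p[len] == ' ') || (p[len] == '\0');
        if (starts_ok && ends_ok)
            return 1;
        p += len;
    }
    return 0;
}
```

In a real app the list would come from glGetString(GL_EXTENSIONS).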

> (For example, some machines have native
> high-precision exponentiation and logarithm.  Others don't, and have to
> emulate them with an instruction sequence that's roughly an order of
> magnitude slower and requires extra registers.  The host CPU might be
> faster if a vertex program is heavily loaded with such instructions.  Is
> "hardware accelerated" meaningful in cases like this?)
> 
> During the long debate over ARB_vertex_program, the ARB reached the
> consensus that we need to address the "is it fast" question directly.
> So in the meantime I wouldn't want to tell people to infer performance
> parameters by using mechanisms that weren't designed for that purpose.

From reading ARB meeting minutes, I can see that this problem has long
plagued the ARB.  My personal opinion is that OpenGL driver and application
developers need to take a cue from optimization work elsewhere in computing:
the only way to determine if something is "fast enough" is to try it.  It
sucks, but I really think that's the reality of it.  What is "optimal" is a
very fast moving target.

As OpenGL providers, the least we can do is tell app developers, "I
guarantee that will be slow."  There is already precedent for that in the
GLX_visual_rating extension.  In that extension, GLX either tells the app
nothing about the performance of a visual or tells it that the visual is
guaranteed to be slow.  The OpenGL version vs. extension string hack (and it
IS a hack!) does essentially the same thing.
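For reference, a sketch of how an app consumes that guarantee (token values
are as given in the GLX_EXT_visual_rating spec; `classify_caveat` is a
hypothetical helper of mine, and the tokens are #defined here only so the
sketch stands alone instead of pulling in <GL/glx.h>):

```c
/* Token values from the GLX_EXT_visual_rating spec. */
#define GLX_VISUAL_CAVEAT_EXT          0x20
#define GLX_NONE_EXT                   0x8000
#define GLX_SLOW_VISUAL_EXT            0x8001
#define GLX_NON_CONFORMANT_VISUAL_EXT  0x800D

/* Hypothetical helper: map the caveat attribute to a verdict string. */
static const char *
classify_caveat(int caveat)
{
    switch (caveat) {
    case GLX_SLOW_VISUAL_EXT:           return "guaranteed slow";
    case GLX_NON_CONFORMANT_VISUAL_EXT: return "non-conformant";
    case GLX_NONE_EXT:
    default:                            return "no caveat reported";
    }
}

/* In a real app the caveat value would come from:
 *   int caveat;
 *   glXGetConfig(dpy, visinfo, GLX_VISUAL_CAVEAT_EXT, &caveat);
 */
```

Note that "no caveat reported" promises nothing about performance; only the
slow rating is a guarantee, which is exactly the asymmetry I'm proposing for
the extension string.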

This isn't a big deal for me.  I've just noticed that over the past 8 or 10
months the issue of exposing extensions that are guaranteed to be fallbacks
(texture_cube_map being a prime example) in order to get to 1.3 or 1.4 has
come up several times.  It has always been rejected on the grounds that we
don't want to expose an extension that apps may / will EXPECT to be
accelerated when we know that it's NOT.  I'm only suggesting that this is a
possible compromise. :)

-- 
Smile!  http://antwrp.gsfc.nasa.gov/apod/ap990315.html


_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel