On Mon, 04.08.2003 at 22.38, Ian Romanick wrote:
> First, let me point out that taking that type of a tone with open-source 
> developers is likely to do nothing but get you ignored.

Since Linux kernel developers seem to have no problems with using any
kind of "tone", I was expecting a relatively moderate tone like mine to
be pretty unremarkable among open-source developers. Then again, I
suppose it could be argued that kernel developers are free-software
developers, not open-source developers, in which case I'll try to be
more careful.

> Especially in this 
> project, we have enough work to do for 10 or 20 times as many developers 
> as we have working.

I didn't ask you to do it. I asked if it was feasible. If it is, I
could do it myself whenever I need it badly enough (though DRI would
probably have to support pbuffers first, which is probably not something
I could easily do myself).

> Being so "intense" is not likely to enamour anyone 
> to your cause.  I'm not saying that because I'm threatening to ignore 
> you or anything like that.  I'm saying that because I'd like to keep 
> this as a useful, technical discussion and *NOT* a flame-war.  I've seen 
> that happen too many times in too many different projects...

Me too, actually; that's why I react when the discussion gets reduced
to a matter of philosophy, which is what you seemed to be doing (or at
least defending).

> Second, did you *read* any of the material that I referenced in my 
> previous message?  Most of the things you mention (both above and below) 
> were discussed and considered in those threads.

No, I just took your word for it that those issues weren't really
considered, and that the 'really interesting' material wasn't there, so
I didn't bother.

> Third, the "vendor/renderer strings" word-around is a completely 
> different issue.  To the best of my knowledge, that was not an issue of 
> performace.  That was an issue of certain hardware not implementing 
> certian functionality (i.e., certain blend modes) correctly.  That some 
> hardware can't be made to follow the spec is unfortunate, but it is not 
> something that an additional API should be created to work-around. That 
> basically degrades the standard from a set of rules to a set of vague 
> guidelines.  Non-graphics APIs don't do this, and neither should a 
> graphics API.

If I understand right, those drivers did not follow the spec exactly,
simply because following it exactly would require a software fallback,
which would probably be... slow, something not desired for some reason.
Now here's a thought - implement the driver correctly, let apps detect
that this particular configuration is slow, and then let apps configure
the more limited but fast texture environment instead if it works for
them.

Then the standard is once again a set of unbreakable rules. Nobody
loses.
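
To make the idea concrete, here's a minimal sketch of what I mean on
the app side. It assumes a hypothetical envTriggersFallback() query -
which is exactly the thing that doesn't exist in GL today and that I'm
asking about - plus the GL 1.3 / ARB_texture_env_combine enums:

#include <GL/gl.h>

// Hypothetical: some driver-provided query that says whether the texture
// environment currently bound would hit a software fallback. Nothing like
// this exists in GL today; providing it is the whole point of the question.
extern bool envTriggersFallback();

static void chooseTextureEnv()
{
    // Preferred setup: combine against a constant colour exactly the way
    // the spec describes, even if some hardware can't do it natively.
    const GLfloat constant[4] = { 1.0f, 1.0f, 0.0f, 0.0f };
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
    glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, constant);

    if (envTriggersFallback()) {
        // The driver says this setup falls back to software: drop to a
        // plain modulate, which this generation of hardware handles fine.
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    }
}

The driver stays spec-correct in both cases; the app merely chooses
which correct setup it asks for.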

> >>The problem with it is, especially in the case of the MGA, changing one 
> >>subtle thing (like changing the texture constant color from 0x00ffffff 
> >>to 0x00ffff00) can change whether or not there is a fallback.  I don't 
> >>think that apps should or would, at run time, detect these types of 
> >>things and change their behavior.
> > 
> > But Direct3D allows them to do exactly that, and this facility is
> > *used*. A lot. Maybe you don't like it, but game developers are not
> > interested in having users complain about "your game runs like crap on
> > my G400, but it runs quake3 fine, can't you program?" if they can avoid
> > it by simply checking at runtime that "using this nifty texturing method
> > causes a software fallback in the user's driver, probably because of
> > hardware limitations, switch to this other less cool method instead",
> > and thus get a decent framerate even on low-end cards, and still be able
> > to take advantage of high-end cards.
> 
> You're right D3D does let you do that.  And I have yet to hear a single 
> ISV or IHV say, "I really like this functionality.  It kicks butt!" 
> Instead what I have heard is, "Can I have my fingernails pulled out 
> instead?"

I suppose that could be one reaction to software fallbacks not being
available in Direct3D, so that using a configuration the device supports
is mandatory. But this sounds more like an "oh, we have to implement all
these techniques to support all this hardware" than a "this API is
unnecessarily convoluted". If you have actually heard of a better
alternative for games that need maximum performance rather than maximum
correctness, I'd be happy to hear about it, but I'm not aware of one.

> >>Either it needs a constant color of 
> >>0x00ffff00 or it doesn't.
> > 
> > If you mean the app, then this is a naive view. In most cases, the
> > engine may not *need* such a constant color, it can work fine without
> > it, it'll just disable the particular effect that needs it, or replace
> > it with a less realistic and less demanding one, or just a more
> > compatible multipass technique, as software fallbacks are still worse.
> > But it'd still be nice for the engine to know when this constant color
> > does *not* cause a software fallback, so that the more demanding
> > technique can be used if the user upgrades his card.
> 
> I think you've missed the point here, which is my fault.  I gave an 
> example of the core problem instead of stating it.  The problem is that 
> there is a countably infinite number of reasons why a fallback could 
> happen.  No driver developer (closed-source or open-source, Windows or 
> Linux) is going to put a rich interface in their driver to explain why a 
> fallback happened.  Even if they did, no application developer (at least 
> not any that have deadlines to meet) will code to test the myriad 
> possible reasons.  Nobody has time for that in their schedules.

I don't buy this argument. ValidateDevice doesn't do a lot of explaining
either, and the interface it presents is not hard to implement. I'm only
asking to know at runtime that, with the current environment, a fallback
kicks in. *Why* it kicks in is secondary and can be checked at
development time with the debugging techniques you already mentioned;
once the game is finished and deployed, only *whether* the fallback
kicks in really matters.

On the usage side, game developers also have only a finite number of
texture environment techniques they try to validate; they then use the
best one that works. (And in OpenGL, if an app doesn't care about
maximum speed, it doesn't have to check at all; it can just let the
software fallback kick in, right?)
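
To be concrete about the D3D side, the pattern is roughly the following
(D3D9-style, using the real ValidateDevice call; the particular stage
states are only illustrative, and a real game would validate with its
actual textures and render state already bound):

#include <d3d9.h>

// Each "technique" is just a function that programs the texture stages.
typedef void (*SetupFn)(IDirect3DDevice9 *);

// The "demanding" setup: lerp between texture and diffuse by the
// constant colour discussed above.
static void setupFancy(IDirect3DDevice9 *dev)
{
    dev->SetRenderState(D3DRS_TEXTUREFACTOR, 0x00ffff00);
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_LERP);
    dev->SetTextureStageState(0, D3DTSS_COLORARG0, D3DTA_TFACTOR);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
}

// The "less cool but compatible" setup.
static void setupPlain(IDirect3DDevice9 *dev)
{
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
}

// Returns the index of the first setup the driver says it can actually
// render in one pass; if none validates, take the plainest one.
static int pickTechnique(IDirect3DDevice9 *dev)
{
    SetupFn techniques[] = { setupFancy, setupPlain };
    for (int i = 0; i < 2; ++i) {
        DWORD passes = 0;
        techniques[i](dev);
        if (SUCCEEDED(dev->ValidateDevice(&passes)))
            return i;
    }
    return 1;
}

That's the whole interface: a handful of state calls and a yes/no
answer per candidate setup.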

> Even if they did, that wouldn't necessarily tell the application 
> anything useful.  Knowing that a fallback of some sort happened doesn't 
> mean squat.  For example, there are a number of cases in the open-source 
> R100 and R200 drivers that can cause a fallback from TCL mode.  It 
> doesn't mean that rendering will be "slow."  It doesn't mean that the 
> application should render a different way.  It just means that a 
> software path was used instead of a full hardware path.

I'm not talking about TCL. I'm talking about the texture environment.
Hardware texturing still happens with software TCL, right?

> What applications *need* to know is if a particular set of rendering 
> parameters is fast enough to provide the desired user experience.  That 
> is an orthogonal issue to whether or not a particular software path was 
> taken.  For example, it may be possible to render with some settings 
> fully hardware accelerated but still only get 2 fps.  Those cap bits 
> didn't really help the application there, did they?

I don't buy the "no interface is a panacea so nothing should ever be
implemented" argument either.

> This becomes even less helpful with vertex programs.  I can fully 
> imagine a time (now, if you ask Intel) when it is much faster to run a 
> vertex program on the CPU than on an "earlier" generation programmable 
> card.  Does the API tell the application that a vertex program is a 
> "software" fallback then?  This has already caused problems with some 
> applications on graphics hardware that supports fragment programs but 
> use the CPU for vertex programs.  Some D3D applications would detect the 
> software vertex program path and (incorrectly) not use vertex or 
> fragment programs.

Vertex programs are not what I'm talking about either. I don't need to
know whether vertex programs are done in software or hardware. Most
Direct3D games don't seem to care all that much either (although they're
forced to handle it). It's the texturing that really matters.

> That's the problem, and that's why nobody implemented anything.  At best 
> the cap bits or fallback flags are a hint, and at worst they are an 
> outright lie.

If you really wanted to avoid that, you could define the query to say
"fast-path" and "slow-path", instead of "hardware" and "software".

> Nobody here wanted to spend their time implementing 
> something that they couldn't see would solve the real problem.

Well, nothing can solve the real problem - old graphics cards that hang
on to their users - but the issue is about dealing with it in a way that
keeps users happy, for those developers who need them to be.

> Instead, 
> we've spent our time improving the performance of our drivers and 
> improving the ways that applications can measure their frame rate.  Many 
> of the open-source drivers support GLX_MESA_swap_frame_usage (basically 
> a GLX version of WGL_I3D_swap_frame_usage), which can be useful for 
> measuring performance.

Okay. So instead of pulling out your fingernails by setting up some
environments and calling ValidateDevice a couple of times, you have to
make your game's initialization routine actually *profile* all the
texturing techniques on startup, by drawing some rotating gears or
something textured in various ways and seeing which one gives the best
FPS. Neat idea, and probably useful in its own right, but I'm not
convinced that programming such an initialization routine won't eat
into somebody's game development schedule, or leave users complaining
about the long startup time.
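
For what it's worth, here is roughly what such a startup routine would
involve - a sketch only, assuming drawTestScene() and setupTexEnv() as
placeholders for the game's own test scene and candidate texture
environments, with all the window/context plumbing omitted:

#include <GL/gl.h>
#include <sys/time.h>

extern void drawTestScene();              // placeholder: a few textured quads
extern void setupTexEnv(int technique);   // placeholder: programs one candidate env

// Time a fixed number of frames with one candidate texture environment.
static double secondsFor(int technique, int frames)
{
    setupTexEnv(technique);
    glFinish();                           // drain pending work before timing

    timeval t0, t1;
    gettimeofday(&t0, 0);
    for (int i = 0; i < frames; ++i)
        drawTestScene();
    glFinish();                           // make sure the GPU really finished
    gettimeofday(&t1, 0);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}

static int pickFastestTechnique(int numTechniques)
{
    int best = 0;
    double bestTime = 1e30;
    for (int t = 0; t < numTechniques; ++t) {
        double s = secondsFor(t, 50);     // 50 frames per candidate
        if (s < bestTime) {
            bestTime = s;
            best = t;
        }
    }
    return best;
}

Workable, sure, but it's exactly the kind of extra code and extra
startup cost I was talking about.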

> > Just look at GTA3 to see how important this is - it does not have *any*
> > 3D options whatsoever - it's designed to Just Work on the user's system,
> > autodetecting its capabilities and tuning itself to it. And Direct3D
> > lets it - why can't OpenGL implementations be as end-user-friendly? Can
> > Linux really win on the desktop if 3D games can't be made this simple?
> 
> Here's something else for you to think about.  All of those companies 
> that make D3D drivers also make GL drivers.  Don't you think that if 
> ISVs (i.e., their customers!) really needed this type of functionality 
> that at least one of them would have made an EXT or vendor specific 
> extension to provide it?  Yet, none of them have.  Ever.

And what's the correlation between ISVs that use OpenGL for various
solutions, and PC game developers that target a wide range of end-user
hardware (and thus spend a lot of resources making it work great on all
of it)?

I'd really like to hear something more technical than this "this is not
needed" rhetoric.

For example, where Direct3D's ValidateDevice goes wrong and what can be
done to make it more useful, or easier to implement. (I do think it's
more important to make this easier for DRI driver developers than for
any application developers, since this feature is probably pretty much
only going to be used by game developers with a lot of available
resources to waste on HW compatibility in any case.)



