Ove Kaaven wrote:

Mon, 04.08.2003 at 20:05, Ian Romanick wrote:

I think the conclusion was that as a tool for anything other than development / debugging, such a thing was not terribly useful.

Then I beg to differ. Many advanced 3D engines can switch between several types of texture rendering techniques depending on what the hardware can support. It has nothing to do with debugging, since this is all about making an application or game give maximum performance on any *user's* system, not on the developer's system. Perhaps the idea of having apps automatically tune themselves to the user's system is an unheard-of one for the DRI, but it's not in the real world. For example, I believe the Max Payne engine could scale itself from taking advantage of any cool environment-mapping features present in a high-end graphics card, through relatively simple and flat graphics on a low-end card, and even all the way down to doing its own software rendering on a 2D card (at least the version of MAX-FX used in 3DMark2000 could). I'm also pretty sure that the Grand Theft Auto 3 engine is similarly adaptable to the texturing capabilities of the user's system. And the fact that some Linux games also have a need for this feature, which they've worked around by checking vendor/renderer strings, should also have spoken for itself.
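(To make that workaround concrete: the vendor/renderer string check being described looks roughly like the sketch below. This is only an illustration, assuming a current GL context; the "G400" match demonstrates the technique, not a claim about what that hardware can or cannot do.)

/* Minimal sketch of the vendor/renderer string workaround: peek at
 * GL_VENDOR and GL_RENDERER and pick a rendering path based on
 * substring matches.  Requires a current GL context. */
#include <GL/gl.h>
#include <stdio.h>
#include <string.h>

enum texture_path { PATH_FANCY, PATH_SIMPLE };

static enum texture_path pick_texture_path(void)
{
    const char *vendor   = (const char *) glGetString(GL_VENDOR);
    const char *renderer = (const char *) glGetString(GL_RENDERER);

    if (vendor == NULL || renderer == NULL)
        return PATH_SIMPLE;          /* no context, or the query failed */

    printf("GL_VENDOR = %s, GL_RENDERER = %s\n", vendor, renderer);

    /* Illustrative only: pretend this chipset needs the simple path. */
    if (strstr(renderer, "G400") != NULL)
        return PATH_SIMPLE;

    return PATH_FANCY;
}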

First, let me point out that taking that type of a tone with open-source developers is likely to do nothing but get you ignored. Especially in this project, we have enough work to do for 10 or 20 times as many developers as we have working. Being so "intense" is not likely to endear anyone to your cause. I'm not saying that because I'm threatening to ignore you or anything like that. I'm saying it because I'd like to keep this a useful, technical discussion and *NOT* a flame-war. I've seen that happen too many times in too many different projects...


Second, did you *read* any of the material that I referenced in my previous message? Most of the things you mention (both above and below) were discussed and considered in those threads.

Third, the "vendor/renderer strings" work-around is a completely different issue. To the best of my knowledge, that was not an issue of performance. That was an issue of certain hardware not implementing certain functionality (i.e., certain blend modes) correctly. That some hardware can't be made to follow the spec is unfortunate, but it is not something that an additional API should be created to work around. That basically degrades the standard from a set of rules to a set of vague guidelines. Non-graphics APIs don't do this, and neither should a graphics API.

The problem with it is that, especially in the case of the MGA, changing one subtle thing (like changing the texture constant color from 0x00ffffff to 0x00ffff00) can change whether or not there is a fallback. I don't think that apps should or would, at run time, detect these types of things and change their behavior.
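(In GL terms, that "subtle thing" looks something like the sketch below. This is my own illustration, assuming the texture constant color in question maps to GL_TEXTURE_ENV_COLOR and reading the packed values as 0xAARRGGBB. Both calls are perfectly legal GL; the point is that a driver might hardware-accelerate one and silently fall back to software for the other.)

#include <GL/gl.h>

static void set_env_constant_white(void)
{
    /* roughly 0x00ffffff read as AARRGGBB: opaque-ish white, alpha 0 */
    const GLfloat c[4] = { 1.0f, 1.0f, 1.0f, 0.0f };
    glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, c);
}

static void set_env_constant_yellow(void)
{
    /* roughly 0x00ffff00 read as AARRGGBB: yellow, alpha 0 */
    const GLfloat c[4] = { 1.0f, 1.0f, 0.0f, 0.0f };
    glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, c);
}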

But Direct3D allows them to do exactly that, and this facility is *used*. A lot. Maybe you don't like it, but game developers are not interested in having users complain about "your game runs like crap on my G400, but it runs quake3 fine, can't you program?" if they can avoid it by simply checking at runtime that "using this nifty texturing method causes a software fallback in the user's driver, probably because of hardware limitations, switch to this other less cool method instead", and thus get a decent framerate even on low-end cards, and still be able to take advantage of high-end cards.
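(The Direct3D facility being referred to is, presumably, the caps/ValidateDevice() path. A rough sketch of how an app uses it, in C with the COM macros from d3d9.h; this is my own illustration against a D3D9-style device, and the DOT3 setup is a hypothetical stand-in for the "nifty texturing method".)

#include <d3d9.h>

static int fancy_path_ok(IDirect3DDevice9 *dev)
{
    DWORD passes = 0;
    HRESULT hr;

    /* Hypothetical "nifty texturing method": a DOT3 bump-map stage. */
    IDirect3DDevice9_SetTextureStageState(dev, 0, D3DTSS_COLOROP,
                                          D3DTOP_DOTPRODUCT3);
    IDirect3DDevice9_SetTextureStageState(dev, 0, D3DTSS_COLORARG1,
                                          D3DTA_TEXTURE);
    IDirect3DDevice9_SetTextureStageState(dev, 0, D3DTSS_COLORARG2,
                                          D3DTA_DIFFUSE);

    /* Ask the driver whether it can render this setup, and in how
     * many passes.  On failure, or if it needs more passes than we
     * are willing to spend, switch to the less cool method. */
    hr = IDirect3DDevice9_ValidateDevice(dev, &passes);

    return SUCCEEDED(hr) && passes == 1;
}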

You're right, D3D does let you do that. And I have yet to hear a single ISV or IHV say, "I really like this functionality. It kicks butt!" Instead, what I have heard is, "Can I have my fingernails pulled out instead?"


Either it needs a constant color of 0x00ffff00 or it doesn't.

If you mean the app, then this is a naive view. In most cases, the engine may not *need* such a constant color; it can work fine without it. It'll just disable the particular effect that needs it, or replace it with a less realistic and less demanding one, or with a more compatible multipass technique, since a software fallback is still worse. But it'd still be nice for the engine to know when this constant color does *not* cause a software fallback, so that the more demanding technique can be used if the user upgrades his card.

I think you've missed the point here, which is my fault. I gave an example of the core problem instead of stating it. The problem is that there is a countably infinite number of reasons why a fallback could happen. No driver developer (closed-source or open-source, Windows or Linux) is going to put a rich interface in their driver to explain why a fallback happened. Even if they did, no application developer (at least not any that have deadlines to meet) will code to test the myriad possible reasons. Nobody has time for that in their schedules.


Even if they did, that wouldn't necessarily tell the application anything useful. Knowing that a fallback of some sort happened doesn't mean squat. For example, there are a number of cases in the open-source R100 and R200 drivers that can cause a fallback from TCL mode. It doesn't mean that rendering will be "slow." It doesn't mean that the application should render a different way. It just means that a software path was used instead of a full hardware path.

What applications *need* to know is if a particular set of rendering parameters is fast enough to provide the desired user experience. That is an orthogonal issue to whether or not a particular software path was taken. For example, it may be possible to render with some settings fully hardware accelerated but still only get 2 fps. Those cap bits didn't really help the application there, did they?
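(In other words, the only measurement that actually answers the question is a direct one: render a short benchmark with the candidate settings and time it. A sketch along those lines; render_test_frame() and swap_buffers() are hypothetical stand-ins for whatever the engine already does.)

#include <sys/time.h>

extern void render_test_frame(void);   /* hypothetical: draw one frame  */
extern void swap_buffers(void);        /* hypothetical: e.g. glXSwapBuffers */

static double benchmark_fps(int frames)
{
    struct timeval start, end;
    double seconds;
    int i;

    gettimeofday(&start, NULL);
    for (i = 0; i < frames; i++) {
        render_test_frame();
        swap_buffers();
    }
    gettimeofday(&end, NULL);

    seconds = (end.tv_sec - start.tv_sec)
            + (end.tv_usec - start.tv_usec) / 1000000.0;

    return seconds > 0.0 ? frames / seconds : 0.0;
}

/* Usage: if (benchmark_fps(100) < 30.0) switch to a cheaper technique. */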

This becomes even less helpful with vertex programs. I can fully imagine a time (now, if you ask Intel) when it is much faster to run a vertex program on the CPU than on an "earlier" generation programmable card. Does the API tell the application that a vertex program is a "software" fallback then? This has already caused problems with some applications on graphics hardware that supports fragment programs but uses the CPU for vertex programs. Some D3D applications would detect the software vertex program path and (incorrectly) not use vertex or fragment programs.

That's the problem, and that's why nobody implemented anything. At best the cap bits or fallback flags are a hint, and at worst they are an outright lie. Nobody here wanted to spend their time implementing something that they couldn't see would solve the real problem. Instead, we've spent our time improving the performance of our drivers and improving the ways that applications can measure their frame rate. Many of the open-source drivers support GLX_MESA_swap_frame_usage (basically a GLX version of WGL_I3D_swap_frame_usage), which can be useful for measuring performance.
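(A sketch of what using that extension might look like. The entry-point name and signature follow my reading of the MESA_swap_frame_usage spec, so treat them as assumptions; check the extension string and resolve the function at run time, as below.)

#include <GL/glx.h>
#include <stdio.h>
#include <string.h>

typedef int (*GetFrameUsageMESAProc)(Display *dpy, GLXDrawable drawable,
                                     float *usage);

static void report_frame_usage(Display *dpy, int screen, GLXDrawable draw)
{
    const char *exts = glXQueryExtensionsString(dpy, screen);
    GetFrameUsageMESAProc get_usage;
    float usage = 0.0f;

    if (exts == NULL || strstr(exts, "GLX_MESA_swap_frame_usage") == NULL)
        return;                        /* extension not advertised */

    get_usage = (GetFrameUsageMESAProc)
        glXGetProcAddressARB((const GLubyte *) "glXGetFrameUsageMESA");
    if (get_usage == NULL)
        return;

    /* Per the spec (as I read it), a zero return means success and
     * "usage" is the fraction of the swap period the frame consumed. */
    if (get_usage(dpy, draw, &usage) == 0)
        printf("frame used %.0f%% of the swap period\n", usage * 100.0f);
}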

Just look at GTA3 to see how important this is - it does not have *any* 3D options whatsoever - it's designed to Just Work on the user's system, autodetecting its capabilities and tuning itself to it. And Direct3D lets it - why can't OpenGL implementations be as end-user-friendly? Can Linux really win on the desktop if 3D games can't be made this simple?

Here's something else for you to think about. All of those companies that make D3D drivers also make GL drivers. Don't you think that if ISVs (i.e., their customers!) really needed this type of functionality, at least one of them would have made an EXT or vendor-specific extension to provide it? Yet none of them have. Ever.




