Keith Whitwell wrote:
Ian Romanick wrote:
The document is not 100% complete. A few sections, such as the replacement policy, need more discussion before they can be completed. I have also included an "issues" section in the spirit of the "issues" sections in OpenGL extension documents. I think the most significant issue is 3.13, but I don't think any of them are trivial. I fully expect this section to grow. :)

3.13 - I think the only sane option is '2' -- fall back to software when out of memory. I'm pretty sure GL doesn't allow you to silently throw away mipmaps under memory pressure.
That was the consensus I got from several people around here who worked on OpenGL on AIX. I seem to recall that this strategy was used for a long time, but it caused problems with Quake2. The problem was that the app would look at the value reported by GL_MAX_TEXTURE_SIZE and run with it. If that caused texture memory to be over-committed, then the app ran like a dog.
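For what it's worth, the problematic pattern looks roughly like this (hypothetical app-side code; the query itself is standard OpenGL, the sizing decision is the app's own assumption):

/* Hypothetical app code illustrating the over-commitment trap. */
#include <GL/gl.h>

static GLint choose_texture_size(void)
{
    GLint max_size;

    /* This only reports the largest *legal* texture dimension.  It
     * says nothing about how much texture memory is available, so a
     * driver that advertises a big maximum invites the app to build
     * a working set that over-commits card memory. */
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);
    return max_size;
}

Even GL_PROXY_TEXTURE_2D only answers "would one such texture be legal?", not "does my whole working set fit?", so the app has no portable way to do better.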

The best answer might be to make it a configuration option. Let users decide to either fall back to software rasterization, limit the maximum texture size, or use some other dirty trick to prevent the fallback (such as dropping mipmaps under memory pressure). It's a general problem that doesn't have a good solution in OpenGL.
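To make that concrete, a minimal sketch of such a policy switch might look like the following. All of the names here are hypothetical; nothing like this exists in the drivers today:

/* Hypothetical user-selectable policy for handling texture memory
 * over-commitment.  Which policy is right depends on the app, so
 * the choice belongs in configuration, not hard-coded in the driver. */
typedef enum {
    TEXMEM_FALLBACK_SW,    /* fall back to software rasterization */
    TEXMEM_CLAMP_MAX_SIZE, /* advertise a smaller GL_MAX_TEXTURE_SIZE */
    TEXMEM_DROP_MIPMAPS    /* quietly drop mipmap levels under
                            * pressure (not strictly GL-conformant) */
} texmem_policy;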

I would *really* like to discuss the document and anything else related in Monday's #dri-devel meeting. Hopefully people can make it & will have had a chance to digest the document by then.
I don't know if I'm going to make the next couple of IRC meetings; I'm kind of on holiday at the moment.
Ah! Well I hope you're having a good time. I appreciate that you could respond. I know that if I were on holiday, I wouldn't even read dri-devel. :)

At this point I'd like to throw a spanner in the works, maybe. A longstanding issue with the current memory manager that isn't addressed in your document is the struggle between 2d and 3d rendering for available offscreen memory space. I wonder if you can add a note describing how this scheme might cooperate with the XAA offscreen memory allocator to ensure memory allocation is shared between the 2d & 3d worlds according to demand?
A couple of other people had mentioned that as well. It is added as issue 3.14.
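Roughly, the kind of cooperation issue 3.14 is about might look like the sketch below. Every name in it is hypothetical -- neither XAA nor the DRM exposes such an interface today -- it only illustrates "one pool, two clients, eviction callbacks in both directions":

/* Hypothetical shared-pool interface between the 2d (XAA) and 3d
 * (DRI) allocators.  Neither side owns offscreen memory outright;
 * each registers an eviction callback so the other can reclaim
 * space when demand shifts. */
struct shared_pool;

/* Called when the pool steals this block back for the other world.
 * The owner must stop using the range and be able to re-upload. */
typedef void (*shared_pool_evict)(void *owner, unsigned offset,
                                  unsigned size);

int  shared_pool_alloc(struct shared_pool *pool, unsigned size,
                       shared_pool_evict evict, void *owner,
                       unsigned *offset_out);
void shared_pool_free(struct shared_pool *pool, unsigned offset);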

A small nitpick: you talk about wrapping _tnl_run_pipeline(). I don't think this is a good approach, particularly as not all rendering proceeds through that function; in fact, there is no guarantee that drivers even include that function. The cut-down embedded radeon driver on the Mesa embedded-1-branch is an example of such a driver. I would suggest that the points at which we acquire hardware locks are better suited to ensuring resources are present -- although this may be too late to decide that a fallback is required... Hmm...
I based this on what I saw in the radeon driver. It is possible that my analysis of that driver was not correct. My impression was that textures were brought back into memory in the wrapper function. I thought about doing it at lock-acquisition time, but I saw a couple of problems with that. The biggest problem was that the state at the time the lock is acquired may not be the same as when rendering begins. I could see that leading to cases where textures or vertex buffers are brought into memory (and other data is displaced) that won't be used.

Basically, I wanted to find the point where we could say "this is the state that will be used for some actual rendering." At that point, we would make sure that state was available.
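Concretely, the pattern I saw in the radeon driver looks roughly like this (the driver names below are hypothetical; the real driver's functions differ):

/* Sketch of the wrapper approach, assuming the Mesa TNL module. */
#include "tnl/tnl.h"
#include "tnl/t_context.h"

/* Hypothetical helper: make the currently bound textures (and any
 * vertex buffers) resident in card memory, evicting as needed. */
static void exampleValidateState(GLcontext *ctx)
{
    (void) ctx;  /* real work elided; see the design document */
}

static void exampleRunPipeline(GLcontext *ctx)
{
    /* By the time TNL calls this, the GL state is exactly the state
     * the queued primitives will render with, so nothing brought in
     * here is displaced by a later state change before it is used. */
    exampleValidateState(ctx);
    _tnl_run_pipeline(ctx);
}

/* Installed at context creation: */
void exampleInitPipeline(GLcontext *ctx)
{
    TNL_CONTEXT(ctx)->Driver.RunPipeline = exampleRunPipeline;
}

Keith's point stands, of course: a driver that doesn't use the TNL module has no such hook, so whatever validation point we settle on has to exist in every driver, not just the ones that run the full pipeline.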

In any case, there was some good discussion of the design on #dri-devel today. I'm going to make some updates to the document and send out a new version in the next day or so.


