On Fri, 27 Sep 2002, Ian Romanick wrote:
> 
> > > A lot of this stuff is inherently device independent with some device
> > > dependent "direction" (i.e., *ChooseTextureFormat), but it hasn't been
> > > implemented that way.  As a reference point, my previous work removed
> > > somewhere around 450 lines of code from the MGA driver (~5%).  The gains in
> > > the Radeon weren't quite as significant (there was no radeon_texcnv.c to get
> > > rid of). This just feels like a win-win to me. :)
> > > 
> > > Having all of it in the kernel will make it easier to implement other
> > > off-screen buffers that can't be stolen (i.e., pbuffers, dynamically
> > > allocated back/depth buffer, etc.).
> > 
> > I take it you don't really mean "all" of it, ie. not the texture conversion 
> > stuff you mention in the preceding paragraph.
> 
> Right.  I wanted to move the code that actually does the allocation,
> deallocation, and stealing into the kernel.  The only problem that I saw was
> in notifying applications that their memory had been stolen.  Sending a
> signal seemed like it would work, but that also felt like a very ugly
> solution.  
> 
> Linus, do you have any suggestions on this one?

Hmm.. I don't have enough background here to make any good suggestions (I
obviously do read the list, but I skim it much more quickly than my kernel
mails). It depends a lot on what the usage pattern is. 

Sending a signal is not extremely fast, but I assume that stealing is a
fairly rare event (and signals are by no means _slow_ either, so don't get
me wrong). The main problems with signals tend to be that they are hard to
use (asynchronous code always is), and that they can be problematic with
threads (_which_ thread gets the signal? It's going to be pretty random).

There are also some inherent races here: if you need to tell the user that
the AGP memory manager stole part of the memory, how do you make sure he
isn't still using it? And having a two-phase hand-off is _really_ slow,
since that means context switching back and forth.

If we assume that a user can only depend on its memory map while it is
holding the dri lock (to avoid the races with memory going away while
being used), then the taking of the lock also becomes the obvious place to
notify the user that the memory is gone.

One simple and race-free approach is to extend the locking mechanism to
include a list of required memory regions (maybe the list can be a single
element - this all depends on what the usage pattern is). So instead of 
just asking for the lock, you ask for the lock _and_ access to your memory 
region X - and if the memory is gone it returns with some logical error 
code to tell the user to re-allocate and fill in the texture cache.
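As a rough sketch in C of what that combined "lock + validate" call could
look like (every name here is invented for illustration; the real DRI
interfaces differ, and actual lock acquisition is elided):

```c
#include <assert.h>

/* Hypothetical return codes for a combined "lock + validate" call. */
#define DRI_REGION_VALID   0   /* lock taken, region contents intact  */
#define DRI_REGION_STOLEN  1   /* lock taken, but region was recycled */

struct dri_region {
    unsigned long cookie;   /* changes every time the kernel gives the
                             * region to someone else */
};

struct dri_lock_request {
    struct dri_region *region;  /* region the client needs             */
    unsigned long      cookie;  /* cookie from the client's last use   */
};

/* Take the lock and validate the region in one step.  If the kernel
 * stole the region since the client last held the lock, the cookie no
 * longer matches, and the client is told to re-allocate and re-fill. */
static int dri_lock_and_validate(struct dri_lock_request *req)
{
    /* (lock acquisition elided in this sketch) */
    if (req->region->cookie != req->cookie)
        return DRI_REGION_STOLEN;
    return DRI_REGION_VALID;
}
```

The point of folding the check into the lock is that there is no window
in which the client holds the lock but hasn't yet learned its region is
gone.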

(Think of the locking as "page faulting" in the AGP memory - the kernel
keeps track of the cache of AGP regions, and if the kernel decided to give
your region away then you need to re-prime the dang thing).

Would that be an acceptable approach? It should be fairly simple to make: 
each AGP memory area that the kernel keeps track of needs to have a few 
extra bits associated with it:

 - current owner / validation / cookie whatever
 - "am I filled in?" (ie a "cache valid" bit)
 - usage count or "locked bit" to allow clients to say that the cached 
   region cannot be stolen.
 - possibly things like dirty and accessed bits that may matter to the 
   stealing policy
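In C, that per-region bookkeeping might be as small as the following
(field and function names are invented for illustration, not taken from
any real driver):

```c
#include <assert.h>

/* Hypothetical per-region state for a sw-managed AGP "cache". */
struct agp_region_state {
    unsigned long owner_cookie; /* current owner / validation cookie    */
    unsigned int  valid:1;      /* "am I filled in?" (cache valid bit)  */
    unsigned int  dirty:1;      /* may matter to the stealing policy    */
    unsigned int  accessed:1;   /* ditto                                */
    unsigned int  pin_count;    /* usage count: non-zero => unstealable */
};

/* Try to steal a region for a new owner; fails if a client has it
 * pinned.  On success the cookie changes, so the old owner's next
 * lock-and-validate call sees that the contents are gone, and the
 * valid bit is cleared so the new owner knows to fill it in. */
static int agp_try_steal(struct agp_region_state *r, unsigned long new_owner)
{
    if (r->pin_count)
        return 0;               /* locked: cannot be stolen */
    r->owner_cookie = new_owner;
    r->valid = 0;
    return 1;
}
```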

Think of it as a regular data cache that just happens to be sw managed.

                Linus


