magenta wrote:
> On Thu, Jan 16, 2003 at 05:33:42PM -0800, Ian Romanick wrote:

>> 1. In a scheme like this, how could processes be forced to update the
>>    can-swap bits on blocks that they own?
> Should it even be possible for one process to swap out other processes'
> context data? Alternatively (forgive me if this sounds a bit naive), could
> the swapping be handled by agpgart, which just changes the memory mapping
> of the allocated pages in the background?  Sort of an added VM layer, only
> it would swap to system memory (which could then be swapped to disk)...
Changing which physical pages back the AGP mapping would help, but you have to remember that the memory manager also manages on-card memory. If back-buffers and depth-buffers are managed the same way, you could imagine that an application could use all of the on-card memory and prevent another context from being able to allocate a back-buffer.

>> 2. What is the best way for processes to be notified of events that
>>    could cause can-swap bits to change (i.e., rendering completion,
>>    asynchronous buffer-swap completion, etc.)?  Signals from the kernel?
>>    Polling "age" variables?
> I'd lean towards signals, myself, though then that leads to possible
> problems with libGL using a signal which an application wants to use...  Or
> would it be capable of defining new signals?  (I'm not up to speed on how
> that part of the kernel works.  Would it just be as simple as adding a new
> value to an enumeration?)
This is a problem that I ran into very quickly when I started thinking about adding support for asynchronous buffer-swaps. I think we'd have to do something with real-time signals, but my brain refuses to remember how all that works.

>> 4. How could the memory manager handle objects that span multiple
>>    blocks?  In other words, could the memory manager be made to prefer
>>    to swap-out blocks that wholly contain all of the objects that
>>    overlap the block?  Are there other useful metrics?  Prefer to
>>    swap-out blocks that are half full over blocks that are completely
>>    full?
> If the AGP layer were to treat it like a VM layer and the page size were
> small (say, 4K) I don't think this would be an issue.
That may not be possible. Right now the blocks are tracked in the SAREA, and that puts an upper limit on the number of blocks available. On a 64MB memory region, the current memory manager ends up with 64KB blocks, IIRC. As memories get bigger (both on-card and AGP apertures), the blocks will get bigger. Also, right now each block only requires 4 bytes in the SAREA. Any changes made for a new memory manager would make each block require more space, thereby reducing the number of blocks that could fit in the SAREA.

Even if we increase the size of the SAREA, a system with 128MB of on-card memory and a 128MB AGP aperture would require 65,536 blocks (if each block covered 4KB).



_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel
