On 8/25/06, Timothy Miller <[EMAIL PROTECTED]> wrote:
> The GART hardware is almost always used since the CPU paging hardware
> has scrambled any images in system RAM. Resolving this scrambling is
> why normal DMA is seldom used.
I have two suggestions to solve this for us:
(1) For the ring buffer (which doesn't need to be very large anyhow),
if we want multiple pages, we'll just design the hardware to handle
multiple base addresses that we stitch together.
(2) For indirect buffers, software is responsible for dealing with
page boundaries and submitting commands in multiple blocks (which
translate into multiple commands queued up in the ring buffer).
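Suggestion (2) above can be sketched in a few lines. This is only an illustration of the splitting logic, not real driver code: queue_indirect() is a made-up stand-in for whatever ring-buffer command would actually point the GPU at an indirect block.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Stand-in for queuing one indirect-buffer command in the ring;
 * here it just counts how many chunks were submitted. */
static int chunks_submitted;

static void queue_indirect(uint32_t gpu_addr, size_t len)
{
    (void)gpu_addr;
    (void)len;
    chunks_submitted++;
}

/* Split a command stream so no chunk crosses a page boundary;
 * each chunk becomes its own entry in the ring buffer. */
static void submit_split(uint32_t gpu_addr, size_t len)
{
    while (len > 0) {
        /* bytes left before the next page boundary */
        size_t room = PAGE_SIZE - (gpu_addr % PAGE_SIZE);
        size_t chunk = len < room ? len : room;
        queue_indirect(gpu_addr, chunk);
        gpu_addr += chunk;
        len -= chunk;
    }
}
```

Note that a buffer spanning N pages costs N ring entries, which is part of why it's harder to program than a GART mapping.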
That is going to be hard to program. GART memory is easy to use: just
ask the kernel for a chunk of it. It will appear to the app as
contiguous memory since the kernel GART driver will set up the page
tables in the app's address space.
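A toy model of why the region looks contiguous: the GART page table maps consecutive GART pages onto scattered physical pages, so a linear GART address decomposes into a table lookup plus an in-page offset. The table contents and names here are illustrative, not a real API.

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* GART page table: index = GART page number,
 * value = physical page frame number (scattered on purpose). */
static const uint32_t gart_pt[] = { 0x9A, 0x17, 0x42, 0x05 };

/* Translate a linear GART offset to the physical address the
 * hardware actually reaches. */
static uint64_t gart_to_phys(uint32_t gart_addr)
{
    uint32_t page = gart_addr >> PAGE_SHIFT;
    uint32_t off  = gart_addr & (PAGE_SIZE - 1);
    return ((uint64_t)gart_pt[page] << PAGE_SHIFT) | off;
}
```

The app never sees this table; it just sees one flat region.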
Now the app does something with it. The kernel graphics driver has to
turn the GART region address into an internal GPU address. But the
driver is smart and maps the GART region to the same address in
internal space as it has in GART space, so you just copy the address
into the GPU command. Worst case you add a fixed offset to it.
Nobody but the kernel GART driver has to deal with the pages being
scattered everywhere.
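The "same address plus a fixed offset" convention makes the translation a single add. A sketch, with a made-up aperture base and a made-up command encoding:

```c
#include <stdint.h>

/* Hypothetical base of the GART aperture in the GPU's internal
 * address space; with an identity mapping this would be 0. */
#define GPU_GART_BASE 0x80000000u

static uint32_t gart_to_gpu(uint32_t gart_addr)
{
    return GPU_GART_BASE + gart_addr;
}

/* Emitting a command is then just copying the translated address. */
struct gpu_cmd {
    uint32_t opcode; /* hypothetical encoding */
    uint32_t addr;
};

static struct gpu_cmd make_blit_src(uint32_t gart_addr)
{
    struct gpu_cmd c = { 0x10 /* hypothetical BLIT opcode */,
                         gart_to_gpu(gart_addr) };
    return c;
}
```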
You have to ask the kernel GART driver to allocate the RAM for you
since it needs to be marked non-pageable and non-cacheable. Normal app
memory is both pageable and cacheable.
Access to this memory is not that bad since it has an MTRR set for
write combining. You mostly write to it and rarely read from it.
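The write-mostly pattern looks like this in practice: fill the mapped region with sequential writes (which the CPU can combine into bursts) and keep any state you need to read back in a shadow copy in normal cached memory, since reads from a write-combining mapping are slow uncached accesses. The array below just stands in for the WC mapping; all names are illustrative.

```c
#include <stdint.h>
#include <string.h>

#define RING_WORDS 64

static uint32_t wc_ring[RING_WORDS]; /* stands in for the WC mapping */
static uint32_t shadow_head;         /* read position from RAM, never from WC */

static void ring_emit(const uint32_t *cmds, uint32_t n)
{
    /* Sequential writes into the WC region get combined into bursts. */
    memcpy(&wc_ring[shadow_head], cmds, n * sizeof(uint32_t));
    shadow_head += n; /* bookkeeping stays in cached memory */
}
```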
--
Jon Smirl
[EMAIL PROTECTED]
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)