Jose,

In reading this it just occurred to me what the flaw in my code was: I
was setting up the descriptor table for a new pass _before_ waiting for
the last one to complete, so there was a race condition.  If I wait for
idle first, I get no lockups, but the framerate drops.  So here's what
I'm going to do: set up ring pointers so that I start the descriptors for a
new pass where the last one ended.  Then I'll build the new
descriptors, wrapping if necessary, and wait for idle if I hit the start
of the ring (the start of the last pass).  I'll let you know how it goes.
I'm sure this method could be refined, but if it works, things should be
stable and I can check it in.
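Roughly, the plan in code (just a sketch -- all names and sizes here are
placeholders, not the actual mach64 driver code; the 16-byte entry size is
inferred from the 0x10 strides BM_GUI_TABLE takes in your logs):

```c
/* Sketch of the ring-pointer scheme described above. Illustrative only;
 * not actual mach64 driver code. Entries are assumed to be 16 bytes. */

#define RING_ENTRIES 1024           /* assumed table size, in entries */

struct desc_ring {
    unsigned head;                  /* next free entry; a new pass starts here */
    unsigned last_pass_start;       /* first entry of the pass still in flight */
};

/* Claim the next slot, wrapping at the end of the table. Returns 1 if
 * the caller must wait for engine idle first, i.e. we would run into
 * the start of the pass that may still be executing. */
static int ring_advance(struct desc_ring *r)
{
    unsigned next = (r->head + 1) % RING_ENTRIES;
    int must_wait = (next == r->last_pass_start);
    r->head = next;
    return must_wait;
}

/* When a pass is submitted, remember where it began so a later wrap
 * knows where the danger zone starts. */
static void ring_start_pass(struct desc_ring *r)
{
    r->last_pass_start = r->head;
}
```

The point is that waiting for idle only happens on a full wrap into the
still-active pass, instead of before every pass.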

Leif

On Thu, 16 May 2002, José Fonseca wrote:

> On 2002.05.15 00:33 José Fonseca wrote:
> > ... I still have to workout more details:
> > 
> >   a) check whether there is any other buffering, besides the FIFO, going 
> > on. This can only be verified by building a full proof-of-concept example 
> > and checking that nothing goes wrong.
> > 
> 
> I used one DMA buffer (as I don't know much of the kernel API) to hold the 
> history of a register of my choice during the bus-master operation. Here 
> you can see BM_GUI_TABLE progressing:
> 
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050000
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050010
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050020
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050030
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050040
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050050
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050060
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050070
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050080
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050090
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x000500a0
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x000500b0
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x000500c0
> ...
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x000503e0
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x000503f0
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050400
> May 16 20:06:00 localhost kernel: [drm]  REG = 0x00050410
> 
> Although you see just one line per value, in reality there are several (I 
> had to filter duplicates when printing to avoid overflowing the system log), 
> and each buffer is just 24 bytes. Nevertheless, this doesn't mean that 
> there is no buffering when reading the descriptor table. I'm going to devise 
> a test for that: it will monitor the BM_GUI_TABLE value and change the 
> value at the last moment (like waiting for the train to come before 
> crossing the line!)
> 
> >   b) see if the descriptor table can be made into a circular buffer. The 
> > specs mention something about this, but they aren't clear: they say the 
> > circular buffer is in the card's memory, but if the card were copying the 
> > whole buffer then test 3 couldn't be happening...
> > 
> 
> I've allocated a 32KB buffer and put a continuation entry just before the 
> 16KB boundary, plus two final entries: one right above the 16KB boundary and 
> another right at the beginning of the table. Here you can see BM_GUI_TABLE 
> looping around the 16KB boundary! ;-)
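The wrap in that log is consistent with simple index arithmetic. A
hypothetical helper for checking the numbers (not driver code; the 16-byte
entry stride is inferred from BM_GUI_TABLE's 0x10 steps, and the table base
of 0x90000 from the log values below):

```c
#include <stdint.h>

#define ENTRY_SIZE 16u              /* inferred from the 0x10 strides */
#define LOOP_SIZE  (16u * 1024u)    /* boundary the table loops at */

/* Predict the next BM_GUI_TABLE value when the last entry below the
 * 16KB boundary is a continuation entry pointing back at the table
 * start. For checking the log values only. */
static uint32_t next_gui_table(uint32_t table_base, uint32_t cur)
{
    uint32_t next = cur + ENTRY_SIZE;
    if (next - table_base >= LOOP_SIZE)
        next = table_base;          /* continuation entry loops to the start */
    return next;
}
```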
> 
> May 16 20:58:01 localhost kernel: [drm]  REG = 0x00093ff0
> May 16 20:58:01 localhost kernel: [drm]  REG = 0x00090000
> 
> >   c) instead of using a GUI register, it's probably better to use 
> > END_OF_LIST_STATUS@BM_COMMAND to see if the card is processing the last 
> > entry of the descriptor table. If that bit is set then there is no point 
> > in adding to the table, as the engine will surely stop. We'll still need 
> > the buffer aging register to resolve the race condition where the engine 
> > stops while we are changing the table.
> > 
> 
> Here you can see BM_COMMAND:
> 
> May 16 20:07:52 localhost kernel: [drm]  REG = 0xc0000000
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000018
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000000
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000018
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000000
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000018
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000000
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000018
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000000
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000018
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000000
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000018
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000010
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000000
> ...
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000000
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000018
> May 16 20:07:52 localhost kernel: [drm]  REG = 0x40000000
> May 16 20:07:52 localhost kernel: [drm]  REG = 0xc0000018
> May 16 20:07:52 localhost kernel: [drm]  REG = 0xc0000000
> 
> and in the last two lines, END_OF_LIST_STATUS@BM_COMMAND is set! ;-)
> 
> NOTE: For those who don't have the specs, the 
> END_OF_LIST_STATUS@BM_COMMAND is bit 31.
> 
> You can also note the buffer size, 0x18 = 24 bytes. I think the zeros 
> correspond to the instants when the chip is reading a new entry.
> The first line where END_OF_LIST_STATUS@BM_COMMAND is set corresponds 
> to the moment when the bus-mastering engine is still initializing.
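Decoding those readbacks amounts to something like the sketch below. Bit 31
as END_OF_LIST_STATUS comes from the note above; treating the low bits as
the remaining byte count is an inference from the 0x18 (= 24 bytes) values
in the log, not a quote from the specs, and the field width is a guess:

```c
#include <stdint.h>

#define BM_END_OF_LIST_STATUS (1u << 31)   /* bit 31, per the specs */

/* Is the engine processing the last entry of the descriptor table? */
static int bm_on_last_entry(uint32_t bm_command)
{
    return (bm_command & BM_END_OF_LIST_STATUS) != 0;
}

/* Remaining byte count of the current buffer. The field width is an
 * assumption; the log only shows small values like 0x18 (= 24 bytes). */
static uint32_t bm_bytes_remaining(uint32_t bm_command)
{
    return bm_command & 0x00ffffffu;
}
```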
> 
> I think the reason we can see all this fine detail is that this is running 
> on a Celeron III at 700MHz.
> 
> > ...
> 
> 
> As I said above, I'm still going to make a final test before implementing 
> these ideas, but I can't stop thinking that it's just too much coincidence. 
> In particular, the ability to turn the descriptor table into a circular 
> buffer means that adding new entries, rather than redoing the table, was 
> the expected usage, so I'm getting convinced that the chip was designed to 
> be able to do this from the beginning.
> 
> Well, if I get bad news, you'll be the first to know.
> 
> José Fonseca
> 
> _______________________________________________________________
> 
> Have big pipes? SourceForge.net is looking for download mirrors. We supply
> the hardware. You get the recognition. Email Us: [EMAIL PROTECTED]
> _______________________________________________
> Dri-devel mailing list
> [EMAIL PROTECTED]
> https://lists.sourceforge.net/lists/listinfo/dri-devel
> 

-- 
Leif Delgass 
http://www.retinalburn.net



