Michel Dänzer wrote:
> On Mon, 2002-06-03 at 23:39, Linus Torvalds wrote:
> 
>>
>>On Fri, 31 May 2002, Keith Whitwell wrote:
>>
>>>Also note that it actually asks for the pixcache to be flushed *twice* - once
>>>by RADEON_PURGE_CACHE (which writes the RADEON_RB2D_DSTCACHE_CTLSTAT via the
>>>ring) and once in radeon_do_pixcache_flush() which writes the register via MMIO.
>>>
>>Btw, why _do_ we flush these things anyway?
>>
>>I realize that we need to flush the engine in between switching from a DRI
>>application and the X server itself, but that should be part of the lock
>>transfer, should it not? Right now the X server seems to (quite
>>unnecessarily) call in to radeon_cp_idle() all the time, even if no direct
>>rendering is happening at the same time.
>>
> 
> I think that's XAA syncing for direct framebuffer access. I don't see
> any direct RADEONCPWaitForIdle() calls in the driver anyway. I also
> think that the places where info->accel->NeedToSync is set are
> justified, they're to tell XAA it needs to sync before direct
> framebuffer access.

Yes, that would be the reason for the X server calling into that code.
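
For reference, the XAA contract looks roughly like this in the DDX (a sketch
from memory -- hook names as in the radeon driver, details approximate):

/* XAA calls infoRec->Sync() before the server touches the
 * framebuffer directly, but only if an accelerated op has set
 * NeedToSync since the last sync. */

/* at accel init time: */
info->accel->Sync = RADEONCPWaitForIdle;

/* at the end of each accelerated primitive: */
info->accel->NeedToSync = TRUE;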

> Now I wonder if the various RADEON_WAIT_UNTIL_IDLE() and similar calls
> are actually necessary, especially in RADEONLeaveServer(). This way the
> chip goes idle before processing commands from a DRI client, which
> doesn't look terribly effective to me and shouldn't be necessary with
> the CP ring?

It does look unnecessary in a lot of cases.

It's important to realize that these WAIT_UNTIL commands don't cause the X 
server to wait - they are instructions to the hardware to flush its pipes 
before processing further commands.  They just go on the ring (actually an 
indirect buffer) and are left there.
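
For the curious, queueing one of these in the DRM is just a couple of dwords
on the ring (a minimal sketch using the radeon DRM's ring macros; which
IDLECLEAN bits you set depends on what you want flushed):

/* Nothing here blocks the CPU -- the dwords are appended to the
 * ring and the CP interprets them later, stalling the *engine*
 * until its 2d and 3d pipes have drained. */
static inline void radeon_emit_wait_until(drm_radeon_private_t *dev_priv)
{
	RING_LOCALS;

	BEGIN_RING(2);
	OUT_RING(CP_PACKET0(RADEON_WAIT_UNTIL, 0));
	OUT_RING(RADEON_WAIT_2D_IDLECLEAN | RADEON_WAIT_3D_IDLECLEAN);
	ADVANCE_RING();
}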

It's only if you then wait for the engine to go idle by polling registers that 
you actually force a process to wait -- as it is they slow down the graphics 
engine but not (directly) the processes on the host computer.
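
The blocking path, for contrast, is the register poll -- roughly what
radeon_do_wait_for_idle() does in the DRM (sketch from memory, details
approximate):

/* Spin until the engine reports idle, then flush the pixel cache
 * via MMIO.  This is the part that actually stalls a process. */
static int radeon_do_wait_for_idle(drm_radeon_private_t *dev_priv)
{
	int i;

	for (i = 0; i < dev_priv->usec_timeout; i++) {
		if (!(RADEON_READ(RADEON_RBBM_STATUS)
		      & RADEON_RBBM_ACTIVE)) {
			radeon_do_pixcache_flush(dev_priv);
			return 0;
		}
		udelay(1);
	}
	return -EBUSY;
}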

The reason they are there (I guess) is to enforce synchronization between 2d & 
3d rendering, which (presumably) isn't synchronized by the hardware -- 
typically the 2d and 3d units are separate subsystems & don't talk to each 
other at all.

So, the synchronization steps (may) be necessary in the mixed 2d/3d case, but 
in pure 2d rendering, they are a waste.
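
If that's right, the wait only pays for itself at an engine switch, so you
could imagine something like this (hypothetical helper, not in the driver
today):

/* Hypothetical: consecutive ops on the same engine are already
 * ordered by the ring, so only a 2d <-> 3d transition needs the
 * pipeline drained. */
static void radeon_emit_engine_switch(drm_radeon_private_t *dev_priv,
				      int was_3d, int now_3d)
{
	RING_LOCALS;

	if (was_3d == now_3d)
		return;		/* same engine: ring order suffices */

	BEGIN_RING(2);
	OUT_RING(CP_PACKET0(RADEON_WAIT_UNTIL, 0));
	OUT_RING(was_3d ? RADEON_WAIT_3D_IDLECLEAN
			: RADEON_WAIT_2D_IDLECLEAN);
	ADVANCE_RING();
}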


> 
>>Am I missing something? Would it not be better to handle this in the DRM
>>locking code, and only flush when the lock is taken by somebody new? That
>>way, if there isn't any lock contention, there also won't be any
>>unnecessary flushes..

That would be a better way of handling the 2d/3d transition.
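
On the kernel side, Linus' suggestion (quoted below) might be as small as
this (hypothetical -- the DRM already tracks dev->last_context, it just
doesn't report it to the new owner):

#define DRM_LOCK_SAME_OWNER 1	/* made-up extra status code */

int drm_lock_take_report_owner(drm_device_t *dev, unsigned int context)
{
	int was_mine = (dev->last_context == context);

	/* ...acquire the hardware lock exactly as drm_lock_take()
	 * does today... */

	dev->last_context = context;
	return was_mine ? DRM_LOCK_SAME_OWNER : 0;
}

Userland would then skip RADEONEngineRestore() and the flushes whenever it
gets DRM_LOCK_SAME_OWNER back.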

>>(By "in the locking code", I don't actually mean the kernel itself: just
>>make the kernel return a different return code for "successfully got the
>>lock, previous owner was yourself" than for "successfully got the lock,
>>you were the last user", and then the X server and the DRI layer can avoid
>>doing things like RADEONEngineRestore() if we're the same user).
>>
> 
> Doesn't sound bad. :) Except that I suspect a lot of the flushes are
> really inevitable for the above reasons.

Yes, they will probably be swamped by the number of direct-framebuffer-access 
flushes.  One reason for this is that 2d acceleration is still relatively weak 
for the radeon.  It would be nice to have image writes and a 'catchall' 
fallback path that at least go via dma rather than hitting the framebuffer 
directly.  Then there's the render extension...
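
Something along these lines for the image-write case (very rough sketch --
buffer helpers named as in the DDX, signatures approximate, hostdata-blit
packet details elided):

/* Stage the pixels in an indirect buffer and let the CP blit
 * them to the destination, instead of the CPU writing to the
 * framebuffer (which forces a full engine sync first). */
static void RADEONImageWriteDMA(ScrnInfoPtr pScrn, int x, int y,
                                int w, int h, char *src, int src_pitch)
{
    RADEONInfoPtr info = RADEONPTR(pScrn);
    drmBufPtr buf = RADEONCPGetBuffer(pScrn);

    /* emit a hostdata blit header for (x, y, w, h) into buf,
     * then copy the scanlines from src in after it ... */

    RADEONCPFlushIndirect(pScrn);	/* queue it: no engine sync,
					 * and no NeedToSync either */
}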

Keith