Keith Packard wrote:
> On Fri, 2007-12-07 at 11:15 +0000, Keith Whitwell wrote:
>> Keith,
>>
>> Thomas has just left for two weeks of (well deserved!) holiday, so he 
>> may be slow to respond.
> 
> Thanks for taking the time to have a look while he's away; we're
> finishing up the 965 TTM work, and it is posing some challenges with the
> existing kernel interface.
> 
>> In the meantime, have you considered how this will interact with 
>> userspace buffer pools?
> 
> No, I hadn't considered that, as we're not pursuing a two-level
> allocation strategy at this point.
> 
> However, if you consider the blocking patch in conjunction with the
> presumed_offset optimization, I think you'll find that userspace buffer
> pools will not actually be affected negatively by this change.
> 
> The presumed_offset optimization allows the application to compute all
> relocations itself for target buffers which have been mapped to the
> hardware. The kernel relocations are purely a back-up, for cases where
> buffers move between EXECBUFFER invocations.
> 
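If I'm reading this right, the flow is roughly the following -- just a
sketch with made-up names, not the actual TTM structures:

    #include <stdint.h>

    /* Hypothetical relocation record, for illustration only. */
    struct reloc_entry {
        uint32_t batch_index;     /* dword index of the pointer in the batch */
        uint32_t delta;           /* offset within the target buffer */
        uint32_t presumed_offset; /* GTT offset the application assumed */
    };

    /* Application side: write the presumed address straight into the
     * batch, so the kernel rewrite is only needed if the guess turns
     * out to be wrong. */
    static void apply_presumed(uint32_t *batch, const struct reloc_entry *r)
    {
        batch[r->batch_index] = r->presumed_offset + r->delta;
    }
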
>>   I know you guys aren't using them at this 
>> point, but I'm of the opinion that they are an important facility which 
>> needs to be preserved.  At worst it may be that some additional flag is 
>> needed to control this behaviour.
> 
> We could do this, but I believe this would actually require more
> blocking by the client -- it doesn't know when objects are moving in the
> kernel, so it doesn't know when relocation data will need to be
> rewritten.
> 
>> Secondly I wonder whether this isn't already caught by other aspects of 
>> the buffer manager behaviour?
> 
>> i.e., if the buffer to which the relocation points is being moved, 
>> doesn't that imply that all hardware activity related to that buffer 
>> must have concluded?  That is, if the buffer itself is free to move, 
>> surely all commands containing relocations (or chains of relocations) 
>> which point to the buffer must themselves have completed?
> 
> Yes, if the target buffer is moving, then the operation related to the
> relocatee will have been completed and waited for. But re-writing
> relocations doesn't require that the buffers have moved.
> 
> Consider the case of the binding table on 965 which points at surface
> state structures. Executing a command that uses the binding table will
> require that relocations be evaluated for the entries in the table; even
> if nothing moves (ignoring my presumed_offset optimization), those
> relocations will need to be evaluated and the surface state pointers
> stored to the binding table.
> 
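So the binding table is itself just a buffer full of pointers --
something like this sketch (sizes and names invented for illustration):

    #include <stdint.h>

    #define N_SURFACES 8    /* invented size */

    /* Every entry is the GTT address of a surface state object, so
     * every entry is a relocation that must hold a valid address
     * before a batch using the table can run -- even when nothing
     * has moved. */
    struct binding_table {
        uint32_t surface_state_offset[N_SURFACES];
    };
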
> For the application to guarantee that the binding table relocations can
> be written without the kernel needing to wait for the binding table
> buffer to be idle, the application would have to wait every time, not
> just when the buffer actually moves.
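
And the point of presumed_offset, if I follow, is that the kernel can
skip the rewrite -- and hence the idle wait -- whenever the guess was
right.  Something like this hypothetical check (not real DRM code):

    #include <stdint.h>

    struct bo { uint32_t current_offset; };        /* illustrative */
    struct reloc { uint32_t presumed_offset; };    /* illustrative */

    void wait_for_idle(struct bo *bo);             /* hypothetical helper */
    void rewrite_entry(struct bo *relocatee, struct reloc *r,
                       uint32_t new_offset);       /* hypothetical helper */

    static void maybe_patch(struct bo *relocatee, struct bo *target,
                            struct reloc *r)
    {
        /* Guess held: the value userspace wrote is still valid, so
         * there is nothing to rewrite and no need to idle the
         * relocatee (e.g. the binding table). */
        if (r->presumed_offset == target->current_offset)
            return;

        /* Target moved: hardware must be done with the relocatee
         * before we scribble new addresses into it. */
        wait_for_idle(relocatee);
        rewrite_entry(relocatee, r, target->current_offset);
    }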

OK, it sounds like you're talking about situations where the driver is 
modifying state in buffers *only* through changes to the relocations?

It's probably not surprising the fence is not implemented, as I'd 
normally expect those relocation changes to be associated with changes 
to the other data, which would imply mapping the buffer (and hence the 
wait).  I do understand the examples, though, and can see where you're 
trying to take this.

Anyway, I'm hopeful that this won't break other usages...

Keith
