2010/3/2 Thomas Hellström <tho...@shipmail.org>:
> Michel Dänzer wrote:
>>
>> On Tue, 2010-03-02 at 00:32 +0200, Pauli Nieminen wrote:
>>>
>>> bo allocation is still too expensive an operation for DMA buffers in
>>> classic Mesa. It takes quite a lot of CPU time in memory binding (AGP
>>> systems) and in the ttm_mem_global functions. Would it be possible to
>>> move some parts of those expensive operations into the pool fill and
>>> pool free code?
>>>
>>
>> Maybe we need userspace BO sub-allocation and/or caching. At least for
>> the 'DMA' buffers it should be simple for userspace to keep a
>> round-robin list of buffers.
>>
>
> Yeah, that's indeed one of the solutions.
> The drawback is that there is no way for process 2 to empty process 1's
> bo cache in situations of memory shortage, although I guess there are
> ways to send "empty cache" events to user-space if needed.
>
> The other solution is kernel bo sub-allocation and caching, but then one
> has to live with the overhead of page faults and VMA setups / teardowns
> with the associated TLB flushes.
>
> /Thomas

There is already a round-robin list of (suballocated) buffers, and that
works well enough. The only problem with the current implementation is
that buffers are freed very lazily, to avoid any need for reallocation.
But the extra GTT memory use looks like a small price to pay for
avoiding the cost of allocating bos.
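
Roughly, the scheme is something like this (a minimal sketch with
made-up names and the kernel ioctls stubbed out; the real code also has
to deal with fencing, flushing and size buckets):

#include <stdlib.h>

#define CACHE_SLOTS  16
#define DMA_BO_SIZE  (64 * 1024)    /* assume fixed-size DMA buffers */

struct bo { void *ptr; };

/* Stand-ins for the expensive kernel ioctl path. */
static struct bo *bo_alloc(size_t size)
{
	struct bo *bo = malloc(sizeof(*bo));
	bo->ptr = malloc(size);
	return bo;
}
static void bo_free(struct bo *bo) { free(bo->ptr); free(bo); }
static int bo_busy(struct bo *bo) { (void)bo; return 0; } /* GPU done? */

struct bo_cache {
	struct bo *slot[CACHE_SLOTS];
	unsigned next;                  /* round-robin cursor */
};

/* Reuse the next cached buffer if the GPU is done with it; fall back
 * to a fresh (expensive) allocation otherwise. */
static struct bo *cache_get(struct bo_cache *c)
{
	struct bo *bo = c->slot[c->next];

	if (bo && !bo_busy(bo)) {
		c->slot[c->next] = NULL;
		c->next = (c->next + 1) % CACHE_SLOTS;
		return bo;
	}
	return bo_alloc(DMA_BO_SIZE);
}

/* Lazy free: keep the buffer around for reuse instead of returning it
 * to the kernel; the extra GTT use is the price for skipping realloc. */
static void cache_put(struct bo_cache *c, struct bo *bo)
{
	for (unsigned i = 0; i < CACHE_SLOTS; i++) {
		if (!c->slot[i]) {
			c->slot[i] = bo;
			return;
		}
	}
	bo_free(bo);                    /* cache full: really free it */
}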

When I tested reducing the number of buffers held by Mesa, I was hoping
to lose very little performance in exchange for fewer empty buffers held
around. But that was a bit too much to hope for.

But at least this looks like enough for other components that reallocate
buffers often. There are temporary bo allocations in the DDX. The kernel
pool allocation reduces the allocation cost enough to match what the
cost was in UMS. That means 20-40 times better allocation performance
for uc/wc pages.
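
To show why the pool helps, here is a rough userspace model of it.
make_pages_wc() is a stand-in for the real caching-attribute change
(e.g. set_pages_array_wc() in the kernel); everything else here is made
up for illustration:

#include <stdlib.h>

#define POOL_BATCH     64
#define POOL_PAGE_SIZE 4096

struct page { void *addr; };

/* Stand-in for the expensive caching-attribute change. In the kernel
 * each call costs a global TLB flush, no matter how many pages it
 * covers, which is why batching wins. */
static void make_pages_wc(struct page **pages, unsigned n)
{
	(void)pages; (void)n;
}

struct wc_pool {
	struct page *free[POOL_BATCH];
	unsigned count;
};

/* Pool fill: allocate and convert a whole batch, paying the flush once
 * per POOL_BATCH pages instead of once per page per bo allocation. */
static void pool_fill(struct wc_pool *p)
{
	struct page *batch[POOL_BATCH];

	for (unsigned i = 0; i < POOL_BATCH; i++) {
		batch[i] = malloc(sizeof(*batch[i]));
		batch[i]->addr = malloc(POOL_PAGE_SIZE);
	}
	make_pages_wc(batch, POOL_BATCH);   /* one flush for the batch */

	for (unsigned i = 0; i < POOL_BATCH; i++)
		p->free[p->count++] = batch[i];
}

/* Allocation fast path: pop a pre-converted page, no flush at all. */
static struct page *pool_get(struct wc_pool *p)
{
	if (p->count == 0)
		pool_fill(p);
	return p->free[--p->count];
}

The fast path never touches page attributes at all; the flush cost is
paid once per batch at fill time, which is where the 20-40x comes from.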

Pauli
