Pauli Nieminen wrote:
> 2010/3/19 Thomas Hellström <tho...@shipmail.org>:
>   
>> Pauli, Dave and Jerome,
>>
>> Before reviewing this, Could you describe a bit how this interfaces with the
>> TTM memory accounting. It's important for some systems to be able to set a
>> limit beyond which TTM may not pin any pages.
>>
>> Am I right in assuming that TTM memory accounting kicks in only when TTM
>> allocates and frees pages from the pool?
>>     
>
> yes.
>
> TTM memory accounting is still handled in ttm_tt.c, so the pool is outside
> of it. But I can move the memory accounting calls into the pool if that is
> the preferred place.
>
> With the current implementation the pool can hold about 512 pages more
> memory than the TTM limit.
>
>   

In that case I think we should keep the pool outside of TTM memory
accounting. Since the system can (or should be able to) reclaim the pool
pages through the shrink mechanism, the pool is sort of transparent from a
memory accounting point of view, although shrinking comes with a
performance penalty.
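
Roughly what I have in mind for the shrink hook, sketched against the
current shrinker interface (the ttm_pool_* helpers below are made-up names,
not anything from your patches):

#include <linux/mm.h>		/* struct shrinker, register_shrinker() */

/* Hypothetical pool helpers, assumed to exist elsewhere in the allocator. */
extern unsigned ttm_pool_cached_pages(void);
extern unsigned ttm_pool_free_pages(unsigned nr);

/*
 * Shrinker callback: with nr_to_scan == 0 the VM only asks how many pages
 * the pool could give back; otherwise free up to nr_to_scan pool pages and
 * report what is left.
 */
static int ttm_pool_shrink(int nr_to_scan, gfp_t gfp_mask)
{
	if (nr_to_scan)
		ttm_pool_free_pages(nr_to_scan);
	return ttm_pool_cached_pages();
}

static struct shrinker ttm_pool_shrinker = {
	.shrink = ttm_pool_shrink,
	.seeks  = DEFAULT_SEEKS,
};

/* register_shrinker(&ttm_pool_shrinker) at pool init,
 * unregister_shrinker(&ttm_pool_shrinker) at pool takedown. */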

>> Can the system reclaim *all* pages not used by TTM through a shrink
>> mechanism?
>>     
>
> Not with the current version, but I can modify the patch so that the
> system can reclaim all pages. The current lower limit is 16 pages in the
> pool because that avoids refills for 2D-only desktop use.
>
> The limit can already be changed at runtime, so making it scale down to a
> zero-sized pool is only a minor change. But what should happen on a pool
> refill if the system has just forced the pool size to zero?
>   
16 pages sounds OK to me.

Perhaps we'd want to be able to change both this limit and the upper pool
size limit from a sysfs interface, within ttm/memory_accounting?
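
Something like this, just to illustrate (attribute name, default value and
the backing variable are placeholders, and where the kobject should live is
still open):

#include <linux/kobject.h>
#include <linux/sysfs.h>
#include <linux/kernel.h>

static unsigned long pool_max_pages = 256;	/* placeholder default */

static ssize_t pool_max_show(struct kobject *kobj,
			     struct kobj_attribute *attr, char *buf)
{
	return sprintf(buf, "%lu\n", pool_max_pages);
}

static ssize_t pool_max_store(struct kobject *kobj,
			      struct kobj_attribute *attr,
			      const char *buf, size_t count)
{
	unsigned long val;

	if (strict_strtoul(buf, 10, &val))
		return -EINVAL;
	pool_max_pages = val;
	return count;
}

static struct kobj_attribute pool_max_attr =
	__ATTR(pool_max_pages, 0644, pool_max_show, pool_max_store);

/* then: sysfs_create_file(<the ttm/memory_accounting kobject>,
 *                         &pool_max_attr.attr); */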
>   
>> In the long run, I'd like to have a pool of non-kernel-mapped pages instead
>> of a pool of uncached / write-combined pages, because then we'd have quite
>> fast transition from write-combined to write-back, but I guess that will be
>> something for the future.
>>     
>
> I think this can be simulated with multiple pools, if the free logic is
> changed from handling a single pool at a time to combining multiple pools
> into a single wb transition operation.
> The trouble with very large cache transition operations is that
> allocating large contiguous arrays in the kernel is problematic. The
> current code limits the size of a single cache transition operation to
> avoid possible memory allocation problems.
>
>   
I think that's a good strategy.
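
To illustrate the bounded-transition idea with the existing arch/x86
set_pages_array_wb() call (the chunk size, helper name and on-stack array
are arbitrary choices of mine, not taken from the patches):

#include <linux/list.h>
#include <linux/mm.h>
#include <linux/gfp.h>
#include <asm/cacheflush.h>	/* set_pages_array_wb() */

#define POOL_WB_CHUNK	64	/* arbitrary bound on a single transition */

/* Drain pages queued for freeing (linked through page->lru, possibly
 * gathered from several pools) back to write-back in small batches, so
 * only a small fixed page-pointer array is ever needed. */
static void pool_drain_to_wb(struct list_head *pages)
{
	struct page *chunk[POOL_WB_CHUNK];
	struct page *p, *tmp;
	unsigned n = 0;

	list_for_each_entry_safe(p, tmp, pages, lru) {
		list_del(&p->lru);
		chunk[n++] = p;
		if (n == POOL_WB_CHUNK) {
			set_pages_array_wb(chunk, n);
			while (n)
				__free_page(chunk[--n]);
		}
	}
	if (n) {
		set_pages_array_wb(chunk, n);
		while (n)
			__free_page(chunk[--n]);
	}
}

Each set_pages_array_wb() call still pays its own attribute-change and
flush overhead, so the chunk size is a trade-off between that cost and the
size of the array we would otherwise have to allocate.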

However, the use-case I was thinking of was not fast freeing, but fast
reading of GPU memory, like so:

*) Unbind from AGP.
*) Mark pages write-back() while the kernel map is still 'unmapped'
(just like highmem pages).
*) Read.
*) Cache-flush pages and mark them write-combined(), while the kernel map
is still 'unmapped'.

The operation will not involve any 'set_pages_x', because the kernel map
is not affected.
However, there is no exported API to unmap pages from the linear kernel
map (yet).
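
In pseudo-C (every helper below is hypothetical; as said, no such API is
exported today, they stand in for "change the memory type of the non-kernel
mappings only", which is cheap exactly because there is no kernel mapping
to fix up):

#include <linux/mm.h>

/* Hypothetical helpers -- no such exported API today. */
extern void pages_set_wb_nokmap(struct page **pages, unsigned long n);
extern void pages_set_wc_nokmap(struct page **pages, unsigned long n);
extern void pages_clflush(struct page **pages, unsigned long n);

static void readback_sketch(struct page **pages, unsigned long num_pages)
{
	/* 1) unbind from AGP (not shown) */

	/* 2) mark write-back; no set_pages_x, the kernel map stays unmapped */
	pages_set_wb_nokmap(pages, num_pages);

	/* 3) read the contents back through a temporary cached mapping */

	/* 4) flush caches and return the pages to write-combined */
	pages_clflush(pages, num_pages);
	pages_set_wc_nokmap(pages, num_pages);
}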

/Thomas

>> /Thomas
>>
>>
>>
>>
>> Pauli Nieminen wrote:
>>     
>>> When allocating wc/uc pages, the cache state transition requires a cache
>>> flush, which is an expensive operation. To avoid cache flushes, wc/uc
>>> pages should be allocated in large groups, so that only a single cache
>>> flush is required for the whole group of pages.
>>>
>>> In some cases drivers need to allocate and deallocate many pages in a
>>> short time frame. In that case we can avoid cache flushes if we keep the
>>> pages in the pool for a while before actually freeing them.
>>>
>>> arch/x86 was missing set_pages_array_wc and set_memory_array_wc. Patches
>>> 6 and 7 add the missing functions and hook set_pages_array_wc into the
>>> pool allocator.
>>>
>>
>>
>>     



