On 06/20/2014 04:30 PM, Volkin, Bradley D wrote:
On Fri, Jun 20, 2014 at 06:25:56AM -0700, Tvrtko Ursulin wrote:

On 06/19/2014 06:35 PM, Volkin, Bradley D wrote:
On Thu, Jun 19, 2014 at 02:48:29AM -0700, Tvrtko Ursulin wrote:

Hi Brad,

On 06/18/2014 05:36 PM, bradley.d.vol...@intel.com wrote:
From: Brad Volkin <bradley.d.vol...@intel.com>

This adds a small module for managing a pool of batch buffers.
The only current use case is for the command parser, as described
in the kerneldoc in the patch. The code is simple, but separating
it out makes it easier to change the underlying algorithms and to
extend to future use cases should they arise.

The interface is simple: alloc to create an empty pool, free to
clean it up; get to obtain a new buffer, put to return it to the
pool. Note that all buffers must be returned to the pool before
freeing it.
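
For illustration, the four entry points could look roughly like this (names
and signatures are a sketch of the alloc/free/get/put described above, not
necessarily what the patch ends up with):

	/* create an empty pool */
	struct i915_gem_batch_pool *
	i915_gem_batch_pool_alloc(struct drm_device *dev);

	/* tear the pool down; every buffer must have been put back first */
	void i915_gem_batch_pool_free(struct i915_gem_batch_pool *pool);

	/* obtain a buffer of at least 'size' bytes, reusing a pooled one if possible */
	struct drm_i915_gem_object *
	i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool, size_t size);

	/* return the buffer to the pool; the caller must not use it afterwards */
	void i915_gem_batch_pool_put(struct i915_gem_batch_pool *pool,
				     struct drm_i915_gem_object *obj);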

The pool caps the number of buffers it will hold because some tests
(e.g. gem_exec_nop) create a very large number of buffers (e.g.
___). Buffers are purgeable while in the pool, but are not explicitly
truncated, in order to avoid overhead during execbuf.

Locking is currently based on the caller holding the struct_mutex.
We already do that in the places where we will use the batch pool
for the command parser.
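
For example, get/put could assert that convention at their entry points with
something like the following (illustrative only; assumes the pool keeps a
pointer back to its drm_device):

	lockdep_assert_held(&pool->dev->struct_mutex);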

Signed-off-by: Brad Volkin <bradley.d.vol...@intel.com>
---

Re: pool capacity
My original testing showed something like thousands of buffers in
the pool after a gem_exec_nop run. But when I reran with the max
check disabled just now to get an actual number for the commit
message, the number was more like 130.  I developed and tested the
changes incrementally, and suspect that the original run was before
I implemented the actual copy operation. So I'm inclined to remove
or at least increase the cap in the final version. Thoughts?

Some random thoughts:

Is it strictly necessary to cap the pool size? I ask because it seems to
introduce a limit where there wasn't an explicit one before.

No, I only added it because there were a huge number of buffers in the
pool at one point. But that seems to have been an artifact of my
development process, so unless someone says they really want to keep
the cap, I'm going to drop it in the next rev.

Cap or no cap (I am for no cap), the pool is still "grow only" at the
moment, no? So after one allocation storm, objects on the pool's inactive
list end up wasting memory forever.

Oh, so what happens is that when you put() an object back in the pool, we
set obj->madv = I915_MADV_DONTNEED, which should tell the shrinker that it
can drop the backing storage for the object if we need space. When you get()
an object, we set obj->madv = I915_MADV_WILLNEED and get new backing pages.
So the number of objects grows (capped or not), but the memory used can be
controlled.
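
A minimal sketch of that put()/get() behaviour, assuming returned objects sit
on pool->inactive_list via an illustrative pool_link member (not the exact
patch code):

	/* put: hand the buffer back and mark its backing storage expendable */
	static void batch_pool_put(struct i915_gem_batch_pool *pool,
				   struct drm_i915_gem_object *obj)
	{
		obj->madv = I915_MADV_DONTNEED;	/* shrinker may purge the pages */
		list_add_tail(&obj->pool_link, &pool->inactive_list);
	}

	/* get: reuse a pooled buffer; if the shrinker purged its pages, fresh
	 * pages are allocated the next time the object's pages are pinned */
	static struct drm_i915_gem_object *
	batch_pool_get(struct i915_gem_batch_pool *pool)
	{
		struct drm_i915_gem_object *obj;

		obj = list_first_entry_or_null(&pool->inactive_list,
					       struct drm_i915_gem_object,
					       pool_link);
		if (obj) {
			list_del(&obj->pool_link);
			obj->madv = I915_MADV_WILLNEED;	/* pages needed again */
		}
		return obj;
	}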

Education time for me, I see. :)

So the object is in pool->inactive_list _and_ on some other list, so that the shrinker can find it?

Thanks,

Tvrtko