Otherwise this looks reasonable. See some comments inline below (for the
future API spec of the same feature).



From: Christophe Milard [mailto:christophe.mil...@linaro.org] 
Sent: Thursday, December 22, 2016 11:24 AM
To: Savolainen, Petri (Nokia - FI/Espoo) 
<petri.savolai...@nokia-bell-labs.com>; Mike Holmes <mike.hol...@linaro.org>; 
Bill Fischofer <bill.fischo...@linaro.org>; LNG ODP Mailman List 
<lng-odp@lists.linaro.org>; Yi He <yi...@linaro.org>
Subject: api for small buffer allocations

Hi,

I am trying to sum up what was said at the arch call yesterday, regarding 
memory allocation to figure out how to rewrite my "buddy allocator for drv 
interface" patch series.

- Some voices (mostly at the beginning of the call) seemed to say that a single 
allocation interface would be best, regardless of the size being allocated. I 
objected that the current shm_reserve() interface has an overhead of about 64 
bytes per allocated block, and that the shm_lookup() function is probably of 
little interest for most small allocations. Moreover, the handle-to-address 
indirection is not of much use if the only thing that matters is the address...

- Towards the end of the call, it seemed the use of new allocation functions 
was less of a problem, but Petri wanted each memory pool user to be able to 
create their own pool rather than using a predefined (common) pool. Support 
for fixed element size allocation (slab rather than buddy) was also required.

So here comes an alternative approach. I would like to get Petri's (obviously 
needed) blessing on this drv API, before its final implementation.

/* Create a pool: elmnt_sz may be zero, meaning unknown size.
 * This will create a pool by internally performing a _ishm_reserve()
 * of the requested pool_sz (or more, e.g. nearest power of 2) plus the
 * needed control data.
 * For linux-gen, a zero element size would create a pool with buddy
 * allocation, while a fixed (>0) element size would give a slab pool. */
odpdrv_shm_pool_t odpdrv_shm_mempool_create(const char *name, uint64_t pool_sz,
                                            uint32_t elmnt_sz);


>>
>> elmnt_sz is actually maximum size for the alloc(). So I'd prefer that name 
>> (max_alloc_size) and documentation.
>> I think both sizes should be given. Pool_size would be a sum of all 
>> (simultaneous) alloc(size).
>>


/* Destroy a pool: returns non-zero on error */
int odpdrv_shm_mempool_destroy(odpdrv_shm_pool_t pool);

/* Search for an existing pool.
 * Returns the pool, or an invalid handle on failure. */
odpdrv_shm_pool_t odpdrv_shm_mempool_lookup(const char *name);

/* Allocate memory from a pool:
 * if the pool was created with an element size (for linux-gen: slab),
 * the size requested here is checked to be less than or equal to
 * the element size given at pool creation time;
 * otherwise, an element of at least size bytes is allocated (buddy).
 * Returns NULL on error.
 */
void *odpdrv_shm_pool_alloc(odpdrv_shm_pool_t pool, uint32_t size);

>>
>> size <= max_alloc_size above, otherwise results are undefined.
>>
>> returns NULL if the sum of all alloc() sizes exceeds pool_sz above == pool 
>> is empty.
>>
>> -Petri
>>


/* Free memory back to a pool: returns non-zero on error */
int odpdrv_shm_pool_free(odpdrv_shm_pool_t pool, void *addr);

The API above matches what I felt was required when the arch call ended... 
Does this match your views?
Any hope of this getting in if I code it?

Thanks for your answers,

Christophe.
