> -----Original Message-----
> From: Christophe Milard [mailto:christophe.mil...@linaro.org]
> Sent: Friday, December 23, 2016 3:36 PM
> To: Savolainen, Petri (Nokia - FI/Espoo) <petri.savolainen@nokia-bell-labs.com>
> Subject: Re: api for small buffer allocations
> 
> On 23 December 2016 at 11:56, Savolainen, Petri (Nokia - FI/Espoo)
> <petri.savolai...@nokia-bell-labs.com> wrote:
> >
> >
> >> -----Original Message-----
> >> From: Christophe Milard [mailto:christophe.mil...@linaro.org]
> >> Sent: Friday, December 23, 2016 10:57 AM
> >> To: Savolainen, Petri (Nokia - FI/Espoo) <petri.savolainen@nokia-bell-labs.com>
> >> Cc: Mike Holmes <mike.hol...@linaro.org>; Bill Fischofer <bill.fischo...@linaro.org>; LNG ODP Mailman List <lng-o...@lists.linaro.org>; Yi He <yi...@linaro.org>
> >> Subject: Re: api for small buffer allocations
> >>
> >> On 23 December 2016 at 09:15, Savolainen, Petri (Nokia - FI/Espoo)
> >> <petri.savolai...@nokia-bell-labs.com> wrote:
> >> >
> >> >
> >> >> -----Original Message-----
> >> >> From: Christophe Milard [mailto:christophe.mil...@linaro.org]
> >> >> Sent: Thursday, December 22, 2016 3:13 PM
> >> >> To: Savolainen, Petri (Nokia - FI/Espoo) <petri.savolainen@nokia-bell-labs.com>
> >> >> Cc: Mike Holmes <mike.hol...@linaro.org>; Bill Fischofer <bill.fischo...@linaro.org>; LNG ODP Mailman List <lng-o...@lists.linaro.org>; Yi He <yi...@linaro.org>
> >> >> Subject: Re: api for small buffer allocations
> >> >>
> >> >> On 22 December 2016 at 13:49, Savolainen, Petri (Nokia - FI/Espoo)
> >> >> <petri.savolai...@nokia-bell-labs.com> wrote:
> >> >> >
> >> >> > HTML mail ... otherwise looks reasonable. See some comments below
> >> >> > (for the future API spec for the same feature).
> >> >>
> >> >> Sorry for that. Obviously Google feels it should revert to HTML from
> >> >> time to time.
> >> >>
> >> >> >
> >> >> >
> >> >> >
> >> >> > From: Christophe Milard [mailto:christophe.mil...@linaro.org]
> >> >> > Sent: Thursday, December 22, 2016 11:24 AM
> >> >> > To: Savolainen, Petri (Nokia - FI/Espoo) <petri.savolainen@nokia-bell-labs.com>; Mike Holmes <mike.hol...@linaro.org>; Bill Fischofer <bill.fischo...@linaro.org>; LNG ODP Mailman List <lng-o...@lists.linaro.org>; Yi He <yi...@linaro.org>
> >> >> > Subject: api for small buffer allocations
> >> >> >
> >> >> > Hi,
> >> >> >
> >> >> > I am trying to sum up what was said at the arch call yesterday
> >> >> > regarding memory allocation, to figure out how to rewrite my "buddy
> >> >> > allocator for drv interface" patch series.
> >> >> >
> >> >> > - Some voices seemed to say that a single allocation function would
> >> >> > be best, regardless of the size being allocated (mostly at the
> >> >> > beginning of the call). I objected that the current shm_reserve()
> >> >> > interface has an overhead of about 64 bytes per allocated block, and
> >> >> > that the availability of the shm_lookup() function is probably not
> >> >> > of much interest for most minimal allocations. Moreover, the
> >> >> > handle-to-address indirection is not of much use if the only thing
> >> >> > that matters is the address...
> >> >> >
> >> >> > - Towards the end of the call, it seemed the usage of new allocation
> >> >> > functions was less of a problem, but that Petri wanted each memory
> >> >> > pool user to be able to create their own pool rather than using a
> >> >> > predefined (common) pool. Also, support for fixed element size
> >> >> > allocation (slab rather than buddy) was required.
> >> >> >
> >> >> > So here comes an alternative approach. I would like to get Petri's
> >> >> > (obviously needed) blessing on this drv API before its final
> >> >> > implementation.
> >> >> >
> >> >> > /* Create a pool: elmnt_sz may be zero, meaning unknown size.
> >> >> >  * This will create a pool by internally performing a _ishm_reserve()
> >> >> >  * of the requested pool_sz (or more, e.g. nearest power of 2) plus
> >> >> >  * the needed control data.
> >> >> >  * For linux-gen, a size-zero element would create a pool with buddy
> >> >> >  * allocation, and a fixed (>0) element size would give a slab pool. */
> >> >> > odpdrv_shm_pool_t odpdrv_shm_mempool_create(const char *name,
> >> >> >                                             uint64_t pool_sz,
> >> >> >                                             uint32_t elmnt_sz);
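> >> >> >
> >> >> > As a sketch of intended usage (the alloc/free function names here
> >> >> > are just assumed for illustration, they are not part of this
> >> >> > proposal yet):
> >> >> >
> >> >> > odpdrv_shm_pool_t pool;
> >> >> > void *msg;
> >> >> >
> >> >> > pool = odpdrv_shm_mempool_create("drv_msgs", 4096, 64); /* slab pool */
> >> >> > msg  = odpdrv_shm_pool_alloc(pool, 64); /* one 64-byte element */
> >> >> > odpdrv_shm_pool_free(pool, msg);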
> >> >> >
> >> >> >
> >> >> >>>
> >> >> >>> elmnt_sz is actually the maximum size for alloc(). So I'd prefer
> >> >> >>> that name (max_alloc_size) and documentation.
> >> >>
> >> >> Not sure I agree. But not worth hours of discussion. Will call it
> >> >> max_alloc_size.
> >> >
> >> > @param max_size  Maximum size in bytes for an alloc() call
> >> >
> >> >
> >> >
> >> >>
> >> >> >>> I think both sizes should be given. Pool_size would be a sum of
> >> >> >>> all (simultaneous) alloc(size).
> >> >> >>>
> >> >>
> >> >> Your comment applies to fixed size (slab) alloc. A size of 0 would
> >> >> imply arbitrary sizes (buddy). I am not sure what you mean by "I think
> >> >> both sizes should be given": both sizes ARE already given in the
> >> >> proposed prototype. As for "Pool_size would be a sum of all
> >> >> (simultaneous) alloc(size)": for buddy allocation, the pool size may
> >> >> need to be up to 50% larger than the sum of all allocs. For slab,
> >> >> pool_size is the sum of all N*element_max_size, where N is the number
> >> >> of current allocations.
> >> >> Is that what you meant?
> >> >
> >> >
> >> > // Create a pool of total 530 bytes for my allocs,
> >> > // max alloc size is 302.
> >> > mempool_create("foo", 530, 302);
> >>
> >> So your "max_size", or "max_alloc_size", is 302, right? (And 530 is the
> >> pool size.)
> >> What does it mean? From the arch call I understood that both a buddy
> >> allocator and a fixed size (slab) allocator are wanted. So what should I
> >> do for this call:
> >> mempool_create("foo", 530, 302);
> >> A buddy allocator or a fixed size allocator?
> >
> >
> > Application (or driver) would not care.
> 
> Agreed. But the implementation should be able to choose between the
> diverse methods (buddy, slab, HW...) from the given parameters.
> 
> > It should be possible for an implementation to use a HW buffer manager.
> > Those typically use fixed size blocks (everything is rounded up to
> > max_size).
> 
> yes.
> 
> >
> > We can add more parameters to the create call, so that the implementation
> > can calculate the needed memory reservation in advance. For example:
> >
> > typedef struct {
> >         // Sum of all (simultaneous) allocs
> >         uint64_t pool_size;
> >
> >         // Minimum alloc size application will request from pool
> >         uint32_t min_alloc;
> >
> >         // Maximum alloc size application will request from pool
> >         uint32_t max_alloc;
> >
> > } odp_shm_pool_param_t;
> 
> That makes more sense.
> Not sure we really need a struct for 3 values, but it could well be so
> on the north interface if you want.

With a param struct it's easier to add new params in a backwards compatible 
manner (if needed). Also, we use 'param' structs with all other create calls. 
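
For example, pool creation could then look like this (a sketch only; the
create function and the init helper are assumed names here, following the
usual 'param' convention):

odp_shm_pool_param_t param;
odp_shm_pool_t pool;

odp_shm_pool_param_init(&param); /* assumed helper: set defaults */
param.pool_size = 530;
param.min_alloc = 30;
param.max_alloc = 302;

pool = odp_shm_pool_create("foo", &param);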


> 
> >
> > From these, the implementation may calculate:
> > num_bufs   = pool_size / min_alloc;
> > buf_size   = max_alloc;
> > mem_size   = num_bufs * buf_size;
> >
> > For example, an implementation may choose to use a buddy allocator if
> > min_alloc is far from max_alloc and it cares about memory footprint.
> > Otherwise, a fixed size (HW based) implementation would be used.
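> >
> > // For example, with pool_size = 530, min_alloc = 30, max_alloc = 302
> > // (the numbers used earlier in this thread), that gives:
> > // num_bufs = 530 / 30  = 17
> > // buf_size = 302
> > // mem_size = 17 * 302 = 5134 bytes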
> 
> Yes. With both min and max, that becomes possible.
> 
> >
> >
> >
> >>
> >> My proposal goes like this:
> >> p1 = mempool_create("foo", 530, 30); => fixed size (slab), rounded to
> >> 540 bytes, i.e. 18*30. All allocations are 30 bytes. 30 is not the
> >> max size, 30 is THE (unique) size. The fact that the user can allocate
> >> 30 bytes and only use 3 of them is of no interest to the API.
> >> p2 = mempool_create("foo", 530, 0); => buddy (rounded to 1K).
> >> In my case, the choice between buddy/slab is based on whether the
> >> element size is given (i.e. >0) or not.
> >> In your proposal, I do not understand what I would base my buddy/slab
> >> choice on. If all the allocations that follow require that
> >> max_alloc_size, slab is best. If the allocations are anything between
> >> 0 and max_alloc_size, then buddy is best (and the max_alloc_size is of
> >> no interest).
> >> How would I know?
> >>
> >> Back to my proposal (reminder: p1 is slab, p2 is buddy):
> >>
> >> alloc(p1, -1) => OK: returns a pointer to 30 bytes
> >
> > This feature is not needed, as the application can always tell how much
> > it really needs.
> 
> As you want.
> 
> >
> >
> >> alloc(p1, 2) => OK: returns a pointer to 30 bytes
> >
> > Rounding up an allocation is OK, although the application must not use
> > (or know about) that extra space.
> 
> yes.
> 
> >
> >
> >> alloc(p1, 30) => OK: returns a pointer to 30 bytes
> >> alloc(p1, 31) => NOK: returns NULL
> >> alloc(p2, 5)  => OK: returns a pointer to 8 bytes
> >> alloc(p2, 31) => OK: returns a pointer to 32 bytes
> >>
> >> Anything exceeding the pool capacity returns NULL (we seem to agree on
> >> that).
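> >>
> >> (As a sketch, the buddy rounding above could use the usual power-of-2
> >> bit trick; the helper name is just illustrative:)
> >>
> >> static inline uint64_t next_pow2(uint64_t v)
> >> {
> >>         v--;                      /* assumes v >= 1 */
> >>         v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
> >>         v |= v >> 8;  v |= v >> 16; v |= v >> 32;
> >>         return v + 1;  /* next_pow2(5) == 8, next_pow2(31) == 32 */
> >> }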
> >
> > From the application's point of view, the pool size is the sum of the
> > allocations, not the actual memory reservation for the pool.
> 
> OK. As buddy allocation guarantees a minimum of 50% efficiency, we can
> do it like this:
> if (min_alloc == max_alloc)
>     => slab (size = pool size rounded up to an integer number of min_alloc)
> else
>     => buddy (next_power_of_2(2 * given_pool_sz))
> 
> OK?

Seems OK. Also, the algorithm is easy to tune afterwards. E.g. you could use 
slab for a bit wider range - it should be the faster of the two, since it's 
simpler.

if (min_alloc == max_alloc || min_alloc > 0.8*max_alloc)
   => slab (max 20% waste)
else
   => buddy (max 50% waste)
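
In C, the selection could look roughly like this (a sketch only; the enum
and function names below are made up for illustration, not proposed API):

typedef enum { POOL_IMPL_SLAB, POOL_IMPL_BUDDY } pool_impl_t;

static pool_impl_t select_pool_impl(uint32_t min_alloc, uint32_t max_alloc)
{
        /* Slab when alloc sizes are (nearly) fixed: max 20% waste, and it
         * should be the faster of the two. min_alloc > 0.8 * max_alloc is
         * checked in 64-bit integer math to avoid float and overflow. */
        if (min_alloc == max_alloc ||
            (uint64_t)10 * min_alloc > (uint64_t)8 * max_alloc)
                return POOL_IMPL_SLAB;

        /* Buddy otherwise: max 50% waste, better memory footprint when
         * alloc sizes vary widely. */
        return POOL_IMPL_BUDDY;
}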


-Petri

