On Mon, Apr 30, 2018 at 8:31 AM, Jameson Miller <jam...@microsoft.com> wrote:
> Adds the following functionality to memory pools:
>
>  - Lifecycle management functions (init, discard)
>
>  - Test whether a memory location is part of the managed pool
>
>  - Function to combine 2 pools
>
> This also adds logic to track all memory allocations made by a memory
> pool.
>
> These functions will be used in a future commit in this commit series.
>
> Signed-off-by: Jameson Miller <jam...@microsoft.com>
> ---
>  mem-pool.c | 114 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
>  mem-pool.h |  32 +++++++++++++++++

> diff --git a/mem-pool.h b/mem-pool.h
> index 829ad58ecf..34df4fa709 100644
> --- a/mem-pool.h
> +++ b/mem-pool.h
> @@ -19,8 +19,27 @@ struct mem_pool {
>
>         /* The total amount of memory allocated by the pool. */
>         size_t pool_alloc;
> +
> +       /*
> +        * Array of pointers to "custom size" memory allocations.
> +        * This is used for "large" memory allocations.
> +        * The *_end variables are used to track the range of memory
> +        * allocated.
> +        */
> +       void **custom, **custom_end;
> +       int nr, nr_end, alloc, alloc_end;
>  };

What is the design goal of this mem pool?
What is it really good at, and which usage patterns should we avoid?

It looks like internally the mem-pool can either use mp_blocks
that are stored as a linked list, or it can have custom allocations
stored in an array.
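
If I read the new fields right, a single allocation would be routed
roughly like the sketch below. This is only my guess at the shape of
it: the helper name mem_pool_xalloc, the size cutoff and the
ALLOC_GROW bookkeeping are assumptions, not necessarily what your
patch does.

#include "cache.h"
#include "mem-pool.h"

/*
 * Hypothetical routing of one allocation; the field names follow the
 * quoted struct, everything else is made up for illustration.
 */
void *mem_pool_xalloc(struct mem_pool *pool, size_t len)
{
	if (len >= pool->block_alloc / 2) {
		/* "large": take a custom allocation and remember its range */
		void *p = xmalloc(len);
		ALLOC_GROW(pool->custom, pool->nr + 1, pool->alloc);
		ALLOC_GROW(pool->custom_end, pool->nr_end + 1, pool->alloc_end);
		pool->custom[pool->nr++] = p;
		pool->custom_end[pool->nr_end++] = (char *)p + len;
		pool->pool_alloc += len;
		return p;
	}
	/* "small": carve the bytes out of the current mp_block */
	return mem_pool_alloc(pool, len);
}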

Is the linked list or the custom part sorted by some key?
Does it need to be sorted?
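
(I ask because, without sorting, I would expect the containment test
to be a linear walk over both parts, along the lines of the sketch
below. The mp_block field names come from the existing header; the
function name and the custom-array handling are my assumptions.)

/* somewhere in mem-pool.c */
int mem_pool_contains(struct mem_pool *pool, void *mem)
{
	struct mp_block *p;
	int i;

	/* Is it inside one of the fixed-size blocks? */
	for (p = pool->mp_block; p; p = p->next_block)
		if (mem >= (void *)p->space && mem < (void *)p->end)
			return 1;

	/* Is it one of the custom allocations? */
	for (i = 0; i < pool->nr; i++)
		if (mem >= pool->custom[i] && mem < pool->custom_end[i])
			return 1;

	return 0;
}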

For comparison, I am currently looking at alloc.c, which is really
good at handing out memory in equally sized chunks, i.e. it is very
efficient at providing memory for fixed-size structs. On top of that
it does not track any of its allocations, as it relies on program
termination for cleanup.
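
(The alloc.c pattern I mean, heavily simplified, is roughly the
following: hand out fixed-size slots from a big slab and never track
or free them. This is a sketch from memory, not the actual code;
xmalloc is git's usual wrapper.)

#define BLOCKING 1024	/* number of nodes grabbed per slab */

/*
 * alloc.c style: no per-allocation tracking, no free; the memory
 * simply lives until the program exits.
 */
static void *fixed_size_alloc(char **slab, int *nr_left, size_t node_size)
{
	void *ret;

	if (!*nr_left) {
		*slab = xmalloc(BLOCKING * node_size);
		*nr_left = BLOCKING;
	}
	ret = *slab;
	*slab += node_size;
	(*nr_left)--;
	return ret;
}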

This memory pool, on the other hand, seems to be optimized for
allocations of varying sizes: some of them huge (stored in the
custom part), and most of them rather small, going into the
mp_blocks?
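
If that is the intent, I would expect a caller to end up looking
roughly like this (mem_pool_init, mem_pool_alloc and mem_pool_discard
are the names from your series, but I am guessing at the exact
signatures, and the sizes are made up):

#include "cache.h"
#include "mem-pool.h"

static void demo_usage(void)
{
	struct mem_pool pool;
	char *small, *huge;

	mem_pool_init(&pool, 64 * 1024);	/* small allocations share 64k blocks */

	small = mem_pool_alloc(&pool, 64);		/* small: carved out of an mp_block */
	huge = mem_pool_alloc(&pool, 8 * 1024 * 1024);	/* huge: lands in the custom part */

	memcpy(small, "hello", 6);
	memset(huge, 0, 8 * 1024 * 1024);

	mem_pool_discard(&pool);	/* releases blocks and custom entries in one go */
}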

Thanks,
Stefan
