On 1/25/24 22:12, Alexandru Elisei wrote:
> Memory is added to CMA with cma_declare_contiguous_nid() and
> cma_init_reserved_mem(). This memory is then put on the MIGRATE_CMA list in
> cma_init_reserved_areas(), where the page allocator can make use of it.
cma_declare_contiguous_nid() reserves memory in memblock and marks it
for subsequent CMA usage, whereas cma_init_reserved_areas() activates
these memory areas through init_cma_reserved_pageblock(). The standard
page allocator only receives this memory via free_reserved_page(), and
only if the page block activation fails.
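For context, the activation failure path in cma_activate_area() already does
exactly that, roughly as below (paraphrased from mm/cma.c; details vary by
kernel version):

out_error:
	/*
	 * Activation failed: give every page of the area back to the buddy
	 * as an ordinary (non-CMA) page, then neutralise the area by
	 * zeroing ->count.
	 */
	for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
		free_reserved_page(pfn_to_page(pfn));
	totalcma_pages -= cma->count;
	cma->count = 0;
	pr_err("CMA area %s could not be activated\n", cma->name);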
>
> If a device manages multiple CMA areas, and there's an error when one of
> the areas is added to CMA, there is no mechanism for the device to prevent
What kind of error? init_cma_reserved_pageblock() failing? But that will
not happen until cma_init_reserved_areas().
> the rest of the areas, which were added before the error occured, from
> being later added to the MIGRATE_CMA list.
Why is this mechanism required? cma_init_reserved_areas() scans over all
CMA areas and tries to activate each of them sequentially. Why is that not
sufficient?
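If I understand the intent, the caller side would look something like the
sketch below (hypothetical, untested; the names, addresses and sizes are
made up for illustration):

static struct cma *tag_cma[2];

static void __init example_reserve_tag_storage(void)
{
	if (cma_init_reserved_mem(0x80000000, SZ_64M, 0, "tag0", &tag_cma[0]))
		return;

	if (cma_init_reserved_mem(0x90000000, SZ_64M, 0, "tag1", &tag_cma[1])) {
		/*
		 * The device cannot operate with only one area; without
		 * cma_remove_mem() there is no way to stop tag0 from being
		 * activated later by cma_init_reserved_areas().
		 */
		cma_remove_mem(&tag_cma[0]);
	}
}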
>
> Add cma_remove_mem() which allows a previously reserved CMA area to be
> removed and thus it cannot be used by the page allocator.
Successfully activated CMA areas do not get used by the buddy allocator.
>
> Signed-off-by: Alexandru Elisei <alexandru.eli...@arm.com>
> ---
>
> Changes since rfc v2:
>
> * New patch.
>
>  include/linux/cma.h |  1 +
>  mm/cma.c            | 30 +++++++++++++++++++++++++++++-
>  2 files changed, 30 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index e32559da6942..787cbec1702e 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -48,6 +48,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>  					unsigned int order_per_bit,
>  					const char *name,
>  					struct cma **res_cma);
> +extern void cma_remove_mem(struct cma **res_cma);
>  extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
>  			      bool no_warn);
>  extern int cma_alloc_range(struct cma *cma, unsigned long start, unsigned long count,
> diff --git a/mm/cma.c b/mm/cma.c
> index 4a0f68b9443b..2881bab12b01 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -147,8 +147,12 @@ static int __init cma_init_reserved_areas(void)
>  {
>  	int i;
>  
> -	for (i = 0; i < cma_area_count; i++)
> +	for (i = 0; i < cma_area_count; i++) {
> +		/* Region was removed. */
> +		if (!cma_areas[i].count)
> +			continue;
Skip a previously added CMA area (now zeroed out)?
>  		cma_activate_area(&cma_areas[i]);
> +	}
>  
>  	return 0;
>  }
cma_init_reserved_areas() gets called via core_initcall(). Somehow the
platform/device needs to call cma_remove_mem() before core_initcall()
gets called? This might be time sensitive.
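To make the ordering concrete: since cma_init_reserved_areas() is a
core_initcall(), the removal would have to come from an earlier hook, e.g.
(hypothetical sketch; early_initcall() runs from do_pre_smp_initcalls(),
before do_initcalls() reaches the core level):

static int __init example_check_device(void)
{
	/* example_device_present() is a made-up availability check. */
	if (!example_device_present())
		cma_remove_mem(&tag_cma[0]);
	return 0;
}
early_initcall(example_check_device);

Anything registered at core_initcall() level or later would depend on link
order relative to cma_init_reserved_areas().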
> @@ -216,6 +220,30 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>  	return 0;
>  }
>  
> +/**
> + * cma_remove_mem() - remove cma area
> + * @res_cma: Pointer to the cma region.
> + *
> + * This function removes a cma region created with cma_init_reserved_mem(). The
> + * ->count is set to 0.
> + */
> +void __init cma_remove_mem(struct cma **res_cma)
> +{
> +	struct cma *cma;
> +
> +	if (WARN_ON_ONCE(!res_cma || !(*res_cma)))
> +		return;
> +
> +	cma = *res_cma;
> +	if (WARN_ON_ONCE(!cma->count))
> +		return;
> +
> +	totalcma_pages -= cma->count;
> +	cma->count = 0;
> +
> +	*res_cma = NULL;
> +}
> +
>  /**
>   * cma_declare_contiguous_nid() - reserve custom contiguous area
>   * @base: Base address of the reserved area optional, use 0 for any
But first, please do explain what errors the device or platform might see
on a previously marked CMA area, such that removing them this way becomes
necessary, preventing their activation via cma_init_reserved_areas().