On Wed, Sep 02, 2020 at 08:02:04PM -0700, Mike Kravetz wrote:
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -383,25 +383,34 @@ postcore_initcall(atomic_pool_init);
>  struct dma_contig_early_reserve {
>  	phys_addr_t base;
>  	unsigned long size;
> +	struct list_head areas;
>  };
> 
> +static __initdata LIST_HEAD(dma_mmu_remap_areas);
> 
>  void __init dma_contiguous_early_fixup(phys_addr_t base, unsigned long size)
>  {
> +	struct dma_contig_early_reserve *d;
> +
> +	d = memblock_alloc(sizeof(struct dma_contig_early_reserve),
> +			   sizeof(void *));
> +	if (!d) {
> +		pr_err("Unable to allocate dma_contig_early_reserve struct!\n");
> +		return;
> +	}
> +
> +	d->base = base;
> +	d->size = size;
> +	list_add_tail(&d->areas, &dma_mmu_remap_areas);
>  }
I wonder if struct cma should grow a flags or type field, so that the
arm code can simply use cma_for_each_area to iterate the CMA areas for
the DMA fixup, and we can remove the extra list and the magic hook.
(A rough sketch of what I mean is at the end of this mail.)

> +/* modify here */
> +LIST_HEAD(cma_areas);

What does this comment mean?

> +static unsigned int cma_area_count;

It seems this is only used to provide a default name for the CMA areas,
but all areas actually provide a name, so I think we can drop the
default naming and the cma_area_count variable entirely.

> 	if (!size || !memblock_is_region_reserved(base, size))
> 		return -EINVAL;
> 
> +
> 	/* ensure minimal alignment required by mm core */

This adds a spurious empty line.

>  static int __init cma_debugfs_init(void)
>  {
>  	struct dentry *cma_debugfs_root;
> -	int i;
> +	struct cma *c;
> 
>  	cma_debugfs_root = debugfs_create_dir("cma", NULL);
> 
> -	for (i = 0; i < cma_area_count; i++)
> -		cma_debugfs_add_one(&cma_areas[i], cma_debugfs_root);
> +	list_for_each_entry(c, &cma_areas, areas)
> +		cma_debugfs_add_one(c, cma_debugfs_root);

I think this should use cma_for_each_area, so that cma_areas can be
kept static in cma.c.
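Something like this, completely untested; cma_debugfs_add_one_cb is
just a name I made up for the small wrapper that cma_for_each_area
needs:

static int __init cma_debugfs_add_one_cb(struct cma *cma, void *data)
{
	/* data is the debugfs root directory passed through cma_for_each_area */
	cma_debugfs_add_one(cma, data);
	return 0;
}

static int __init cma_debugfs_init(void)
{
	struct dentry *cma_debugfs_root;

	cma_debugfs_root = debugfs_create_dir("cma", NULL);

	cma_for_each_area(cma_debugfs_add_one_cb, cma_debugfs_root);
	return 0;
}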
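And for the flags idea at the top of this mail, the rough shape I have
in mind on the arm side is below.  This is only a sketch:
cma_get_flags(), CMA_DMA_REMAP and dma_remap_area() are made-up names
standing in for the new field accessor, the new flag, and the existing
per-area lowmem remap logic:

/* the flag would be set on the area when it is declared, then: */
static int __init dma_contiguous_remap_one(struct cma *cma, void *data)
{
	if (!(cma_get_flags(cma) & CMA_DMA_REMAP))
		return 0;

	/* existing remap of the lowmem mapping for this area */
	dma_remap_area(cma_get_base(cma), cma_get_size(cma));
	return 0;
}

void __init dma_contiguous_remap(void)
{
	cma_for_each_area(dma_contiguous_remap_one, NULL);
}

That would let us drop dma_contiguous_early_fixup and the
dma_mmu_remap_areas list entirely.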