> -----Original Message-----
> From: Christoph Hellwig [mailto:[email protected]]
> Sent: Thursday, July 23, 2020 2:30 AM
> To: Song Bao Hua (Barry Song) <[email protected]>
> Cc: [email protected]; [email protected]; [email protected];
> [email protected]; [email protected];
> [email protected]; [email protected]; Linuxarm
> <[email protected]>; [email protected];
> [email protected]; Jonathan Cameron
> <[email protected]>; Nicolas Saenz Julienne
> <[email protected]>; Steve Capper <[email protected]>; Andrew
> Morton <[email protected]>; Mike Rapoport <[email protected]>
> Subject: Re: [PATCH v3 1/2] dma-direct: provide the ability to reserve
> per-numa CMA
> 
+cc Prime and Daode, who are interested in this patchset.

> On Sun, Jun 28, 2020 at 11:12:50PM +1200, Barry Song wrote:
> >  struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
> >  {
> >     size_t count = size >> PAGE_SHIFT;
> >     struct page *page = NULL;
> >     struct cma *cma = NULL;
> > +   int nid = dev ? dev_to_node(dev) : NUMA_NO_NODE;
> > +   bool alloc_from_pernuma = false;
> > +
> > +   if ((count <= 1) && !(dev && dev->cma_area))
> > +           return NULL;
> >
> >     if (dev && dev->cma_area)
> >             cma = dev->cma_area;
> > -   else if (count > 1)
> > +   else if ((nid != NUMA_NO_NODE) && dma_contiguous_pernuma_area[nid]
> > +           && !(gfp & (GFP_DMA | GFP_DMA32))) {
> > +           cma = dma_contiguous_pernuma_area[nid];
> > +           alloc_from_pernuma = true;
> > +   } else {
> >             cma = dma_contiguous_default_area;
> > +   }
> 
> I find the function rather confusing now.  What about something
> like the following?  (This relies on the fact that dev should never
> be NULL in the DMA API.)
> 
> struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
> {
>       size_t cma_align = min_t(size_t, get_order(size), CONFIG_CMA_ALIGNMENT);
>       size_t count = size >> PAGE_SHIFT;
> 
>       if (!gfpflags_allow_blocking(gfp))
>               return NULL;
>       gfp &= __GFP_NOWARN;
> 
>       if (dev->cma_area)

When I made a similar change in v1, I got a kernel test robot warning
saying that dev should be checked for NULL before being dereferenced.
That warning is probably a false positive if dev can never be NULL in
the DMA API; in case we still want the guard, see the sketch after your
function below.

>               return cma_alloc(dev->cma_area, count, cma_align, gfp);
>       if (count <= 1)
>               return NULL;
> 
>       if (IS_ENABLED(CONFIG_PERNODE_CMA) && !(gfp & (GFP_DMA | GFP_DMA32))) {
>               int nid = dev_to_node(dev);
>               struct cma *cma = dma_contiguous_pernuma_area[nid];
>               struct page *page;
> 
>               if (cma) {
>                       page = cma_alloc(cma, count, cma_align, gfp);
>                       if (page)
>                               return page;
>               }
>       }
> 
>       return cma_alloc(dma_contiguous_default_area, count, cma_align, gfp);
> }

Yes, it looks much better.
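
If we do decide to keep a defensive NULL check to silence the robot,
here is roughly what I would fold into v4 on top of your version. This
is an untested sketch; CONFIG_PERNODE_CMA is the symbol from your
sketch, not what v3 currently uses:

struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
{
	size_t cma_align = min_t(size_t, get_order(size), CONFIG_CMA_ALIGNMENT);
	size_t count = size >> PAGE_SHIFT;

	/* CMA can only be used from contexts that are allowed to sleep */
	if (!gfpflags_allow_blocking(gfp))
		return NULL;
	/* keep only the NOWARN bit; cma_alloc() takes it as its no_warn flag */
	gfp &= __GFP_NOWARN;

	/* the NULL check on dev is only there to keep the robot quiet */
	if (dev && dev->cma_area)
		return cma_alloc(dev->cma_area, count, cma_align, gfp);
	if (count <= 1)
		return NULL;

	if (IS_ENABLED(CONFIG_PERNODE_CMA) && dev &&
	    !(gfp & (GFP_DMA | GFP_DMA32))) {
		int nid = dev_to_node(dev);
		struct cma *cma = NULL;
		struct page *page;

		/* devices with no node assigned report NUMA_NO_NODE */
		if (nid != NUMA_NO_NODE)
			cma = dma_contiguous_pernuma_area[nid];
		if (cma) {
			page = cma_alloc(cma, count, cma_align, gfp);
			if (page)
				return page;
		}
	}

	return cma_alloc(dma_contiguous_default_area, count, cma_align, gfp);
}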

> 
> > +           /*
> > +            * otherwise, page is from either per-numa cma or default cma
> > +            */
> > +           int nid = page_to_nid(page);
> > +
> > +           if (nid != NUMA_NO_NODE) {
> > +                   if (cma_release(dma_contiguous_pernuma_area[nid], page,
> > +                                           PAGE_ALIGN(size) >> PAGE_SHIFT))
> > +                           return;
> > +           }
> > +
> > +           if (cma_release(dma_contiguous_default_area, page,
> 
> How can page_to_nid ever return NUMA_NO_NODE?

I thought page_to_nid() would return NUMA_NO_NODE when CONFIG_NUMA is
not enabled, but I was wrong about that: it always returns a valid node
id (0 in the !CONFIG_NUMA case), so the check is unnecessary. Will get
it fixed in v4.
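
Something like this for the release path in v4, then. This is an
untested sketch; it relies on cma_release() returning false for a NULL
area and for pages that lie outside the given area, which it does:

void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
{
	size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT;

	if (dev && dev->cma_area) {
		/* the page may come from the device's own CMA area */
		if (cma_release(dev->cma_area, page, count))
			return;
	} else {
		/*
		 * Otherwise the page is from either the per-numa CMA or the
		 * default CMA. page_to_nid() always returns a valid node id
		 * (0 without CONFIG_NUMA), so no NUMA_NO_NODE check is
		 * needed before indexing the per-numa array.
		 */
		if (cma_release(dma_contiguous_pernuma_area[page_to_nid(page)],
				page, count))
			return;
		if (cma_release(dma_contiguous_default_area, page, count))
			return;
	}

	/* not a CMA page at all; free it normally */
	__free_pages(page, get_order(size));
}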

Thanks
Barry
