Re: [PATCH v3 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-14 Thread Nicolas Saenz Julienne
On Fri, 2020-08-14 at 08:06 +0200, Christoph Hellwig wrote:
> On Sat, Aug 08, 2020 at 08:33:54AM +0200, Christoph Hellwig wrote:
> > On Fri, Aug 07, 2020 at 10:50:19AM +0200, Nicolas Saenz Julienne wrote:
> > > On Fri, 2020-08-07 at 07:21 +0200, Christoph Hellwig wrote:
> > > > On Thu, Aug 06, 2020 at 08:47:55PM +0200, Nicolas Saenz Julienne wrote:
> > > > > There is no guarantee of CMA's placement, so allocating a
> > > > > zone-specific atomic pool from CMA might return memory from a
> > > > > completely different memory zone. To get around this, double-check
> > > > > CMA's placement before allocating from it.
> > > > 
> > > > As the buildbot pointed out, memblock_start_of_DRAM can't be used from
> > > > non-__init code. But looking at it, I think throwing that in is bogus
> > > > anyway, as cma_get_base returns a proper physical address already.
> > > 
> > > It does indeed, but I'm comparing CMA's base with bitmasks that don't
> > > take into account where the memory starts. Say memory starts at 0x8000,
> > > and CMA falls into ZONE_DMA [0x8000 0xC000]; if you want to compare it
> > > with DMA_BIT_MASK(zone_dma_bits), you're forced to unify the memory
> > > bases.
> > > 
> > > That said, I now realize that this doesn't work for ZONE_DMA32, which
> > > has a hard limit on 32-bit addresses regardless of the memory base.
> > > 
> > > That said, I still need to call memblock_start_of_DRAM(); any
> > > suggestions WRT that? I could save the value in dma_atomic_pool_init(),
> > > which is __init code.
> > 
> > The pool must be about a 32-bit physical address. The offsets can be
> > different for every device.

I now see what you mean.

I was trying to blindly fit CMA with arm64's DMA zone setup, which, as it turns
out, doesn't really honor its purpose. arm64 introduced ZONE_DMA to provide a
30-bit address space, but we're creating it regardless of whether that 30-bit
address space actually exists on the platform. This creates a mismatch between
zone_dma_bits and ZONE_DMA's real placement. I'll try to look at fixing that in
arm64.
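
As a purely illustrative aside (made-up addresses, plain userspace C, not
kernel code), the mismatch looks roughly like this: DMA_BIT_MASK(zone_dma_bits)
describes an absolute limit, while on such a layout the zone itself sits
entirely above it.

#include <stdint.h>
#include <stdio.h>

/* Same definition as the kernel's DMA_BIT_MASK(), copied here so the
 * sketch is self-contained. */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

int main(void)
{
	unsigned int zone_dma_bits = 30;	/* arm64's value for ZONE_DMA */
	uint64_t dram_start   = 0x80000000ULL;	/* hypothetical RAM base */
	uint64_t zone_dma_end = 0xC0000000ULL;	/* hypothetical ZONE_DMA end */
	uint64_t limit = DMA_BIT_MASK(zone_dma_bits);	/* 0x3fffffff, absolute */

	/* ZONE_DMA was carved out of RAM that starts at 2 GiB, so nothing in
	 * it is reachable below the 30-bit limit the mask promises. */
	printf("RAM base %#llx, ZONE_DMA end %#llx, DMA_BIT_MASK(%u) = %#llx\n",
	       (unsigned long long)dram_start,
	       (unsigned long long)(zone_dma_end - 1),
	       zone_dma_bits,
	       (unsigned long long)limit);
	printf("zone fits under the mask: %s\n",
	       zone_dma_end - 1 <= limit ? "yes" : "no");
	return 0;
}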

> Do you plan to resend this one without the memblock_start_of_DRAM
> thingy?

Yes, sorry for the wait; I've been on vacation and short on time. I'll send it
during the day.

Regards,
Nicolas



Re: [PATCH v3 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-14 Thread Christoph Hellwig
On Sat, Aug 08, 2020 at 08:33:54AM +0200, Christoph Hellwig wrote:
> On Fri, Aug 07, 2020 at 10:50:19AM +0200, Nicolas Saenz Julienne wrote:
> > On Fri, 2020-08-07 at 07:21 +0200, Christoph Hellwig wrote:
> > > On Thu, Aug 06, 2020 at 08:47:55PM +0200, Nicolas Saenz Julienne wrote:
> > > > There is no guarantee of CMA's placement, so allocating a zone-specific
> > > > atomic pool from CMA might return memory from a completely different
> > > > memory zone. To get around this, double-check CMA's placement before
> > > > allocating from it.
> > > 
> > > As the buildbot pointed out, memblock_start_of_DRAM can't be used from
> > > non-__init code. But looking at it, I think throwing that in is bogus
> > > anyway, as cma_get_base returns a proper physical address already.
> > 
> > It does indeed, but I'm comparing CMA's base with bitmasks that don't take
> > into account where the memory starts. Say memory starts at 0x8000, and CMA
> > falls into ZONE_DMA [0x8000 0xC000]; if you want to compare it with
> > DMA_BIT_MASK(zone_dma_bits), you're forced to unify the memory bases.
> > 
> > That said, I now realize that this doesn't work for ZONE_DMA32, which has
> > a hard limit on 32-bit addresses regardless of the memory base.
> > 
> > That said, I still need to call memblock_start_of_DRAM(); any suggestions
> > WRT that? I could save the value in dma_atomic_pool_init(), which is
> > __init code.
> 
> The pool must be about a 32-bit physical address. The offsets can be
> different for every device.

Do you plan to resend this one without the memblock_start_of_DRAM
thingy?


Re: [PATCH v3 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-08 Thread Christoph Hellwig
On Fri, Aug 07, 2020 at 10:50:19AM +0200, Nicolas Saenz Julienne wrote:
> On Fri, 2020-08-07 at 07:21 +0200, Christoph Hellwig wrote:
> > On Thu, Aug 06, 2020 at 08:47:55PM +0200, Nicolas Saenz Julienne wrote:
> > > There is no guarantee of CMA's placement, so allocating a zone-specific
> > > atomic pool from CMA might return memory from a completely different
> > > memory zone. To get around this, double-check CMA's placement before
> > > allocating from it.
> > 
> > As the buildbot pointed out, memblock_start_of_DRAM can't be used from
> > non-__init code. But looking at it, I think throwing that in is bogus
> > anyway, as cma_get_base returns a proper physical address already.
> 
> It does indeed, but I'm comparing CMA's base with bitmasks that don't take
> into account where the memory starts. Say memory starts at 0x8000, and CMA
> falls into ZONE_DMA [0x8000 0xC000]; if you want to compare it with
> DMA_BIT_MASK(zone_dma_bits), you're forced to unify the memory bases.
> 
> That said, I now realize that this doesn't work for ZONE_DMA32, which has a
> hard limit on 32-bit addresses regardless of the memory base.
> 
> That said, I still need to call memblock_start_of_DRAM(); any suggestions
> WRT that? I could save the value in dma_atomic_pool_init(), which is __init
> code.

The pool must be about a 32-bit physical address. The offsets can be
different for every device.
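
To put rough numbers on that point (invented offsets and addresses, plain
userspace C, not kernel code): the zone a page sits in is decided by its
physical address alone, while the address each device actually programs is
that physical address shifted by some per-device offset, so a per-device
offset has no place in deciding which atomic pool a page belongs to.

#include <stdint.h>
#include <stdio.h>

struct fake_dev {
	const char *name;
	uint64_t dma_offset;	/* hypothetical: bus address = phys - dma_offset */
};

static uint64_t phys_to_busaddr(const struct fake_dev *dev, uint64_t phys)
{
	return phys - dev->dma_offset;
}

int main(void)
{
	const struct fake_dev a = { "dev-a", 0x0ULL };
	const struct fake_dev b = { "dev-b", 0x80000000ULL };
	uint64_t page_phys = 0xC0000000ULL;	/* one page, one physical address */

	/* Same physical page, two different device-visible addresses. */
	printf("phys %#llx -> %s sees %#llx, %s sees %#llx\n",
	       (unsigned long long)page_phys,
	       a.name, (unsigned long long)phys_to_busaddr(&a, page_phys),
	       b.name, (unsigned long long)phys_to_busaddr(&b, page_phys));
	return 0;
}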


Re: [PATCH v3 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-07 Thread Nicolas Saenz Julienne
On Fri, 2020-08-07 at 07:21 +0200, Christoph Hellwig wrote:
> On Thu, Aug 06, 2020 at 08:47:55PM +0200, Nicolas Saenz Julienne wrote:
> > There is no guarantee of CMA's placement, so allocating a zone-specific
> > atomic pool from CMA might return memory from a completely different
> > memory zone. To get around this, double-check CMA's placement before
> > allocating from it.
> 
> As the buildbot pointed out, memblock_start_of_DRAM can't be used from
> non-__init code. But looking at it, I think throwing that in is bogus
> anyway, as cma_get_base returns a proper physical address already.

It does indeed, but I'm comparing CMA's base with bitmasks that don't take
into account where the memory starts. Say memory starts at 0x8000, and CMA
falls into ZONE_DMA [0x8000 0xC000]; if you want to compare it with
DMA_BIT_MASK(zone_dma_bits), you're forced to unify the memory bases.

That said, I now realize that this doesn't work for ZONE_DMA32, which has a
hard limit on 32-bit addresses regardless of the memory base.

That said, I still need to call memblock_start_of_DRAM(); any suggestions
WRT that? I could save the value in dma_atomic_pool_init(), which is __init
code.
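
As a rough standalone sketch of the two comparisons being discussed (made-up
addresses, plain userspace C, not kernel code): subtracting the RAM base
makes the zone_dma_bits check line up when memory starts high, but
ZONE_DMA32's 32-bit limit is absolute, so the base-relative value is the
wrong thing to compare against it.

#include <stdint.h>
#include <stdio.h>

/* Same definition as the kernel's DMA_BIT_MASK(). */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

int main(void)
{
	uint64_t dram_start = 0x80000000ULL;	/* hypothetical RAM base */
	uint64_t cma_base   = 0x90000000ULL;	/* hypothetical CMA placement */
	uint64_t cma_size   = 0x08000000ULL;	/* 128 MiB */
	uint64_t abs_end    = cma_base + cma_size - 1;
	uint64_t rel_end    = cma_base - dram_start + cma_size - 1;

	/* ZONE_DMA check: compare the base-relative end against the
	 * zone_dma_bits mask (30 bits on arm64). */
	printf("ZONE_DMA  : rel_end %#llx <= %#llx ? %s\n",
	       (unsigned long long)rel_end,
	       (unsigned long long)DMA_BIT_MASK(30),
	       rel_end <= DMA_BIT_MASK(30) ? "yes" : "no");

	/* ZONE_DMA32 check: the 32-bit limit is an absolute address, so the
	 * absolute end is what has to be compared. */
	printf("ZONE_DMA32: abs_end %#llx <= %#llx ? %s\n",
	       (unsigned long long)abs_end,
	       (unsigned long long)DMA_BIT_MASK(32),
	       abs_end <= DMA_BIT_MASK(32) ? "yes" : "no");
	return 0;
}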




Re: [PATCH v3 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-06 Thread Christoph Hellwig
On Thu, Aug 06, 2020 at 08:47:55PM +0200, Nicolas Saenz Julienne wrote:
> There is no guarantee of CMA's placement, so allocating a zone-specific
> atomic pool from CMA might return memory from a completely different
> memory zone. To get around this, double-check CMA's placement before
> allocating from it.

As the buildbot pointed out, memblock_start_of_DRAM can't be used from
non-__init code. But looking at it, I think throwing that in is bogus
anyway, as cma_get_base returns a proper physical address already.


Re: [PATCH v3 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-06 Thread kernel test robot
Hi Nicolas,

I love your patch! Perhaps something to improve:

[auto build test WARNING on iommu/next]
[also build test WARNING on linus/master v5.8]
[cannot apply to next-20200806]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:
https://github.com/0day-ci/linux/commits/Nicolas-Saenz-Julienne/dma-pool-fixes/20200807-025101
base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config: x86_64-randconfig-s031-20200806 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce:
# apt-get install sparse
# sparse version: v0.6.2-117-g8c7aee71-dirty
# save the attached .config to linux build tree
make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=x86_64 

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>, old ones prefixed by <<):

>> WARNING: modpost: vmlinux.o(.text+0x2840fa): Section mismatch in reference from the function atomic_pool_expand() to the function .meminit.text:memblock_start_of_DRAM()
The function atomic_pool_expand() references
the function __meminit memblock_start_of_DRAM().
This is often because atomic_pool_expand lacks a __meminit
annotation or the annotation of memblock_start_of_DRAM is wrong.
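
One hypothetical way to avoid this class of warning, along the lines of the
dma_atomic_pool_init() idea mentioned elsewhere in the thread (a sketch only,
not part of the patch): read the __meminit-only value once from __init code
and keep a cached copy for use from regular .text.

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/types.h>

static phys_addr_t cached_dram_start;

static int __init cache_dram_start(void)
{
	/* __init code may reference __meminit functions without tripping
	 * the section mismatch check. */
	cached_dram_start = memblock_start_of_DRAM();
	return 0;
}
postcore_initcall(cache_dram_start);

/* Non-__init code then uses cached_dram_start instead of calling
 * memblock_start_of_DRAM() directly. */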

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org



[PATCH v3 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-06 Thread Nicolas Saenz Julienne
There is no guarantee of CMA's placement, so allocating a zone-specific
atomic pool from CMA might return memory from a completely different
memory zone. To get around this, double-check CMA's placement before
allocating from it.

Signed-off-by: Nicolas Saenz Julienne 
[hch: rebased, added a fallback to the page allocator, allow dipping into
  lower CMA pools]
Signed-off-by: Christoph Hellwig 
---

Changes since v2:
 - Go back to v1 behavior

 kernel/dma/pool.c | 31 ++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 5d071d4a3cba..30d28d761490 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -3,7 +3,9 @@
  * Copyright (C) 2012 ARM Ltd.
  * Copyright (C) 2020 Google LLC
  */
+#include <linux/cma.h>
 #include <linux/debugfs.h>
+#include <linux/dma-contiguous.h>
 #include <linux/dma-direct.h>
 #include <linux/dma-noncoherent.h>
 #include <linux/init.h>
@@ -55,6 +57,29 @@ static void dma_atomic_pool_size_add(gfp_t gfp, size_t size)
 	pool_size_kernel += size;
 }
 
+static bool cma_in_zone(gfp_t gfp)
+{
+	unsigned long size;
+	phys_addr_t end;
+	struct cma *cma;
+
+	cma = dev_get_cma_area(NULL);
+	if (!cma)
+		return false;
+
+	size = cma_get_size(cma);
+	if (!size)
+		return false;
+
+	/* CMA can't cross zone boundaries, see cma_activate_area() */
+	end = cma_get_base(cma) - memblock_start_of_DRAM() + size - 1;
+	if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
+		return end <= DMA_BIT_MASK(zone_dma_bits);
+	if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
+		return end <= DMA_BIT_MASK(32);
+	return true;
+}
+
 static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 			      gfp_t gfp)
 {
@@ -68,7 +93,11 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 
 	do {
 		pool_size = 1 << (PAGE_SHIFT + order);
-		page = alloc_pages(gfp, order);
+		if (cma_in_zone(gfp))
+			page = dma_alloc_from_contiguous(NULL, 1 << order,
+							 order, false);
+		if (!page)
+			page = alloc_pages(gfp, order);
 	} while (!page && order-- > 0);
 	if (!page)
 		goto out;
-- 
2.28.0
