Re: [PATCH v2 04/10] DMA, CMA: support alignment constraint on cma region

2014-06-15 Thread Joonsoo Kim
On Thu, Jun 12, 2014 at 12:02:38PM +0200, Michal Nazarewicz wrote:
 On Thu, Jun 12 2014, Joonsoo Kim <iamjoonsoo@lge.com> wrote:
  ppc kvm's cma area management needs alignment constraint on
 
 I noticed it earlier and cannot seem to come to terms with this.  It
 should IMO be PPC, KVM and CMA since those are acronyms.  But if you
 have strong feelings, it's not a big issue.

Yes, I will fix it.

 
  cma region. So support it to prepare generalization of cma area
  management functionality.
 
  Additionally, add some comments which tell us why alignment
  constraint is needed on cma region.
 
  Signed-off-by: Joonsoo Kim <iamjoonsoo@lge.com>
 
 Acked-by: Michal Nazarewicz <min...@mina86.com>
 
  diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
  index 8a44c82..bc4c171 100644
  --- a/drivers/base/dma-contiguous.c
  +++ b/drivers/base/dma-contiguous.c
  @@ -219,6 +220,7 @@ core_initcall(cma_init_reserved_areas);
* @size: Size of the reserved area (in bytes),
* @base: Base address of the reserved area optional, use 0 for any
* @limit: End address of the reserved memory (optional, 0 for any).
  + * @alignment: Alignment for the contiguous memory area, should be power of 2
 
 “must be power of 2 or zero”.

Okay.

* @res_cma: Pointer to store the created cma region.
* @fixed: hint about where to place the reserved area
*
  @@ -233,15 +235,15 @@ core_initcall(cma_init_reserved_areas);
*/
   static int __init __dma_contiguous_reserve_area(phys_addr_t size,
  phys_addr_t base, phys_addr_t limit,
  +   phys_addr_t alignment,
  struct cma **res_cma, bool fixed)
   {
  struct cma *cma = cma_areas[cma_area_count];
  -   phys_addr_t alignment;
  int ret = 0;
   
  -   pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
  -            (unsigned long)size, (unsigned long)base,
  -            (unsigned long)limit);
  +   pr_debug("%s(size %lx, base %08lx, limit %08lx align_order %08lx)\n",
  +   __func__, (unsigned long)size, (unsigned long)base,
  +   (unsigned long)limit, (unsigned long)alignment);
 
 Nit: Align with the rest of the arguments, i.e.:
 
 + pr_debug("%s(size %lx, base %08lx, limit %08lx align_order %08lx)\n",
 +  __func__, (unsigned long)size, (unsigned long)base,
 +  (unsigned long)limit, (unsigned long)alignment);

What's the difference between mine and yours?

Thanks.
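
For what it's worth, the two versions look identical above because the
archive mangled the leading whitespace; the nit is presumably about
continuation-line indentation. With the kernel's 8-column tabs, the
suggested form lines the continuation arguments up under the opening quote
of the format string (shown here with spaces for clarity):

        pr_debug("%s(size %lx, base %08lx, limit %08lx align_order %08lx)\n",
                 __func__, (unsigned long)size, (unsigned long)base,
                 (unsigned long)limit, (unsigned long)alignment);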


Re: [PATCH v2 04/10] DMA, CMA: support alignment constraint on cma region

2014-06-12 Thread Joonsoo Kim
On Thu, Jun 12, 2014 at 02:52:20PM +0900, Minchan Kim wrote:
 On Thu, Jun 12, 2014 at 12:21:41PM +0900, Joonsoo Kim wrote:
  ppc kvm's cma area management needs alignment constraint on
  cma region. So support it to prepare generalization of cma area
  management functionality.
  
  Additionally, add some comments which tell us why alignment
  constraint is needed on cma region.
  
  Signed-off-by: Joonsoo Kim <iamjoonsoo@lge.com>
  
  diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
  index 8a44c82..bc4c171 100644
  --- a/drivers/base/dma-contiguous.c
  +++ b/drivers/base/dma-contiguous.c
  @@ -32,6 +32,7 @@
   #include <linux/swap.h>
   #include <linux/mm_types.h>
   #include <linux/dma-contiguous.h>
  +#include <linux/log2.h>
   
   struct cma {
  unsigned long   base_pfn;
  @@ -219,6 +220,7 @@ core_initcall(cma_init_reserved_areas);
* @size: Size of the reserved area (in bytes),
* @base: Base address of the reserved area optional, use 0 for any
* @limit: End address of the reserved memory (optional, 0 for any).
  + * @alignment: Alignment for the contiguous memory area, should be power 
  of 2
* @res_cma: Pointer to store the created cma region.
* @fixed: hint about where to place the reserved area
*
 
 Please move all the description to the new API function rather than the internal one.

The reason I leave the description as is is that I will remove it in a
following patch. I think that moving it now would make the patch bigger
and harder to review.

But, if it is necessary, I will do it. :)

 
  @@ -233,15 +235,15 @@ core_initcall(cma_init_reserved_areas);
*/
   static int __init __dma_contiguous_reserve_area(phys_addr_t size,
  phys_addr_t base, phys_addr_t limit,
  +   phys_addr_t alignment,
  struct cma **res_cma, bool fixed)
   {
  struct cma *cma = cma_areas[cma_area_count];
  -   phys_addr_t alignment;
  int ret = 0;
   
  -   pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
  -            (unsigned long)size, (unsigned long)base,
  -            (unsigned long)limit);
  +   pr_debug("%s(size %lx, base %08lx, limit %08lx align_order %08lx)\n",
 
 Why is it called align_order?

Oops... mistake.
I will fix it.

Thanks.
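
For context, the mismatch is that the format string says align_order while
the value actually printed is the alignment in bytes. A minimal sketch of
the fix (just relabelling; not the final patch) would be:

        pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
                 __func__, (unsigned long)size, (unsigned long)base,
                 (unsigned long)limit, (unsigned long)alignment);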


Re: [PATCH v2 04/10] DMA, CMA: support alignment constraint on cma region

2014-06-12 Thread Michal Nazarewicz
On Thu, Jun 12 2014, Joonsoo Kim <iamjoonsoo@lge.com> wrote:
 ppc kvm's cma area management needs alignment constraint on

I noticed it earlier and cannot seem to come to terms with this.  It
should IMO be PPC, KVM and CMA since those are acronyms.  But if you
have strong feelings, it's not a big issue.

 cma region. So support it to prepare generalization of cma area
 management functionality.

 Additionally, add some comments which tell us why alignment
 constraint is needed on cma region.

 Signed-off-by: Joonsoo Kim <iamjoonsoo@lge.com>

Acked-by: Michal Nazarewicz <min...@mina86.com>

 diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
 index 8a44c82..bc4c171 100644
 --- a/drivers/base/dma-contiguous.c
 +++ b/drivers/base/dma-contiguous.c
 @@ -219,6 +220,7 @@ core_initcall(cma_init_reserved_areas);
   * @size: Size of the reserved area (in bytes),
   * @base: Base address of the reserved area optional, use 0 for any
   * @limit: End address of the reserved memory (optional, 0 for any).
 + * @alignment: Alignment for the contiguous memory area, should be power of 2

“must be power of 2 or zero”.

   * @res_cma: Pointer to store the created cma region.
   * @fixed: hint about where to place the reserved area
   *
 @@ -233,15 +235,15 @@ core_initcall(cma_init_reserved_areas);
   */
  static int __init __dma_contiguous_reserve_area(phys_addr_t size,
   phys_addr_t base, phys_addr_t limit,
 + phys_addr_t alignment,
   struct cma **res_cma, bool fixed)
  {
   struct cma *cma = cma_areas[cma_area_count];
 - phys_addr_t alignment;
   int ret = 0;
  
 - pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
 -  (unsigned long)size, (unsigned long)base,
 -  (unsigned long)limit);
 + pr_debug("%s(size %lx, base %08lx, limit %08lx align_order %08lx)\n",
 + __func__, (unsigned long)size, (unsigned long)base,
 + (unsigned long)limit, (unsigned long)alignment);

Nit: Align with the rest of the arguments, i.e.:

+   pr_debug("%s(size %lx, base %08lx, limit %08lx align_order %08lx)\n",
+__func__, (unsigned long)size, (unsigned long)base,
+(unsigned long)limit, (unsigned long)alignment);

  
   /* Sanity checks */
   if (cma_area_count == ARRAY_SIZE(cma_areas)) {

-- 
Best regards, _ _
.o. | Liege of Serenely Enlightened Majesty of  o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz(o o)
ooo +--m...@google.com--xmpp:min...@jabber.org--ooO--(_)--Ooo--
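
To make the "power of 2 or zero" contract concrete: zero means "use the
default MAX_ORDER/pageblock floor", and any non-zero value must be a power
of two, which is what the patch's sanity check enforces. The check below is
from the patch; the example values are illustrative only:

        if (alignment && !is_power_of_2(alignment))
                return -EINVAL;   /* e.g. 0x500000 (5 MiB) is rejected */
        /* alignment of 0, 0x400000 (4 MiB) or 0x1000000 (16 MiB) passes */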


Re: [PATCH v2 04/10] DMA, CMA: support alignment constraint on cma region

2014-06-11 Thread Aneesh Kumar K.V
Joonsoo Kim <iamjoonsoo@lge.com> writes:

 ppc kvm's cma area management needs alignment constraint on
 cma region. So support it to prepare generalization of cma area
 management functionality.

 Additionally, add some comments which tell us why alignment
 constraint is needed on cma region.

 Signed-off-by: Joonsoo Kim <iamjoonsoo@lge.com>

Reviewed-by: Aneesh Kumar K.V <aneesh.ku...@linux.vnet.ibm.com>


 diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
 index 8a44c82..bc4c171 100644
 --- a/drivers/base/dma-contiguous.c
 +++ b/drivers/base/dma-contiguous.c
 @@ -32,6 +32,7 @@
  #include <linux/swap.h>
  #include <linux/mm_types.h>
  #include <linux/dma-contiguous.h>
 +#include <linux/log2.h>

  struct cma {
   unsigned long   base_pfn;
 @@ -219,6 +220,7 @@ core_initcall(cma_init_reserved_areas);
   * @size: Size of the reserved area (in bytes),
   * @base: Base address of the reserved area optional, use 0 for any
   * @limit: End address of the reserved memory (optional, 0 for any).
 + * @alignment: Alignment for the contiguous memory area, should be power of 2
   * @res_cma: Pointer to store the created cma region.
   * @fixed: hint about where to place the reserved area
   *
 @@ -233,15 +235,15 @@ core_initcall(cma_init_reserved_areas);
   */
  static int __init __dma_contiguous_reserve_area(phys_addr_t size,
   phys_addr_t base, phys_addr_t limit,
 + phys_addr_t alignment,
   struct cma **res_cma, bool fixed)
  {
   struct cma *cma = cma_areas[cma_area_count];
 - phys_addr_t alignment;
   int ret = 0;

 - pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
 -  (unsigned long)size, (unsigned long)base,
 -  (unsigned long)limit);
 + pr_debug("%s(size %lx, base %08lx, limit %08lx align_order %08lx)\n",
 + __func__, (unsigned long)size, (unsigned long)base,
 + (unsigned long)limit, (unsigned long)alignment);

   /* Sanity checks */
   if (cma_area_count == ARRAY_SIZE(cma_areas)) {
 @@ -253,8 +255,17 @@ static int __init __dma_contiguous_reserve_area(phys_addr_t size,
   if (!size)
   return -EINVAL;

 - /* Sanitise input arguments */
 - alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
 + if (alignment && !is_power_of_2(alignment))
 + return -EINVAL;
 +
 + /*
 +  * Sanitise input arguments.
 +  * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
 +  * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
 +  * and CMA property will be broken.
 +  */
 + alignment = max(alignment,
 + (phys_addr_t)PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order));
   base = ALIGN(base, alignment);
   size = ALIGN(size, alignment);
  limit &= ~(alignment - 1);
 @@ -302,7 +313,8 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
  {
   int ret;

 - ret = __dma_contiguous_reserve_area(size, base, limit, res_cma, fixed);
 + ret = __dma_contiguous_reserve_area(size, base, limit, 0,
 + res_cma, fixed);
   if (ret)
   return ret;

 -- 
 1.7.9.5
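
As a concrete illustration of the sanitisation above, here is a standalone
sketch using assumed constants (PAGE_SIZE = 4 KiB, MAX_ORDER = 11,
pageblock_order = 10; typical values, not ones stated in the patch). The
alignment floor works out to 4 KiB << 10 = 4 MiB, so a 1 MiB request is
raised to 4 MiB while a 16 MiB request is kept:

#include <stdio.h>

int main(void)
{
        /* Assumed constants, for illustration only. */
        unsigned long page_size = 4096;            /* PAGE_SIZE */
        int max_order = 11;                        /* MAX_ORDER */
        int pb_order = 10;                         /* pageblock_order */
        int order = (max_order - 1 > pb_order) ? max_order - 1 : pb_order;
        unsigned long floor = page_size << order;  /* 4 MiB here */
        unsigned long req[] = { 0, 1UL << 20, 1UL << 24 };

        for (int i = 0; i < 3; i++) {
                unsigned long eff = (req[i] > floor) ? req[i] : floor;
                printf("requested %#9lx -> effective alignment %#9lx\n",
                       req[i], eff);
        }
        return 0;
}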


Re: [PATCH v2 04/10] DMA, CMA: support alignment constraint on cma region

2014-06-11 Thread Minchan Kim
On Thu, Jun 12, 2014 at 12:21:41PM +0900, Joonsoo Kim wrote:
 ppc kvm's cma area management needs alignment constraint on
 cma region. So support it to prepare generalization of cma area
 management functionality.
 
 Additionally, add some comments which tell us why alignment
 constraint is needed on cma region.
 
 Signed-off-by: Joonsoo Kim <iamjoonsoo@lge.com>
 
 diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
 index 8a44c82..bc4c171 100644
 --- a/drivers/base/dma-contiguous.c
 +++ b/drivers/base/dma-contiguous.c
 @@ -32,6 +32,7 @@
  #include <linux/swap.h>
  #include <linux/mm_types.h>
  #include <linux/dma-contiguous.h>
 +#include <linux/log2.h>
  
  struct cma {
   unsigned long   base_pfn;
 @@ -219,6 +220,7 @@ core_initcall(cma_init_reserved_areas);
   * @size: Size of the reserved area (in bytes),
   * @base: Base address of the reserved area optional, use 0 for any
   * @limit: End address of the reserved memory (optional, 0 for any).
 + * @alignment: Alignment for the contiguous memory area, should be power of 2
   * @res_cma: Pointer to store the created cma region.
   * @fixed: hint about where to place the reserved area
   *

Please move all the description to the new API function rather than the internal one.

 @@ -233,15 +235,15 @@ core_initcall(cma_init_reserved_areas);
   */
  static int __init __dma_contiguous_reserve_area(phys_addr_t size,
   phys_addr_t base, phys_addr_t limit,
 + phys_addr_t alignment,
   struct cma **res_cma, bool fixed)
  {
   struct cma *cma = cma_areas[cma_area_count];
 - phys_addr_t alignment;
   int ret = 0;
  
 - pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
 -  (unsigned long)size, (unsigned long)base,
 -  (unsigned long)limit);
 + pr_debug("%s(size %lx, base %08lx, limit %08lx align_order %08lx)\n",

Why is it called align_order?

 + __func__, (unsigned long)size, (unsigned long)base,
 + (unsigned long)limit, (unsigned long)alignment);
  
   /* Sanity checks */
   if (cma_area_count == ARRAY_SIZE(cma_areas)) {
 @@ -253,8 +255,17 @@ static int __init __dma_contiguous_reserve_area(phys_addr_t size,
   if (!size)
   return -EINVAL;
  
 - /* Sanitise input arguments */
 - alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
 + if (alignment && !is_power_of_2(alignment))
 + return -EINVAL;
 +
 + /*
 +  * Sanitise input arguments.
 +  * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
 +  * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism

I'm not a native speaker, but let's try for clearer documentation:

 Pages at both ends of the CMA area could be merged into adjacent
 unmovable migratetype pages by the page allocator's buddy algorithm.
 In that case, you couldn't get contiguous memory, which is not what
 we want.

 +  * and CMA property will be broken.
 +  */
 + alignment = max(alignment,
 + (phys_addr_t)PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order));
   base = ALIGN(base, alignment);
   size = ALIGN(size, alignment);
  limit &= ~(alignment - 1);
 @@ -302,7 +313,8 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
  {
   int ret;
  
 - ret = __dma_contiguous_reserve_area(size, base, limit, res_cma, fixed);
 + ret = __dma_contiguous_reserve_area(size, base, limit, 0,
 + res_cma, fixed);
   if (ret)
   return ret;
  
 -- 
 1.7.9.5

-- 
Kind regards,
Minchan Kim
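
For completeness, a sketch of how a caller with a hardware alignment
requirement might use the extended helper after this patch. This is a
hypothetical call site, not one from the series, and it would have to live
in drivers/base/dma-contiguous.c since the helper is static; SZ_64M and
SZ_16M are the usual <linux/sizes.h> macros:

        struct cma *cma;
        int ret;

        /* Reserve 64 MiB anywhere below the default limit, 16 MiB aligned. */
        ret = __dma_contiguous_reserve_area(SZ_64M, 0, 0, SZ_16M,
                                            &cma, false);
        if (ret)
                pr_err("cma: reservation failed (%d)\n", ret);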