> On Dec 14, 2018, at 3:54 AM, Anatoly Burakov <anatoly.bura...@intel.com> wrote:
> 
> The external heaps API already implicitly expects start address
> of the external memory area to be page-aligned, but it is not
> enforced or documented. Fix this by implementing additional
> parameter checks at memory add call, and document the page
> alignment requirement explicitly.
> 
> Fixes: 7d75c31014f7 ("malloc: allow adding memory to named heaps")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Anatoly Burakov <anatoly.bura...@intel.com>
> Suggested-by: Yongseok Koh <ys...@mellanox.com>
> ---

Acked-by: Yongseok Koh <ys...@mellanox.com>
 
Thanks
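
For reference, a minimal sketch of a caller that satisfies the alignment
rules documented below. The heap name "ext_heap", the use of regular
system pages via sysconf(), and the anonymous mmap() are my own
illustrative assumptions, not part of the patch, and error handling is
trimmed:

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

#include <rte_malloc.h>
#include <rte_errno.h>

static int
add_external_mem(void)
{
	size_t page_sz = (size_t)sysconf(_SC_PAGESIZE);
	size_t len = 1024 * page_sz;	/* must be a multiple of page_sz */
	void *va;

	/* mmap() returns page-aligned memory, so va_addr passes the check */
	va = mmap(NULL, len, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (va == MAP_FAILED)
		return -1;

	if (rte_malloc_heap_create("ext_heap") != 0)
		return -1;

	/* NULL iova_addrs: page IOVA addresses are filled with RTE_BAD_IOVA */
	if (rte_malloc_heap_memory_add("ext_heap", va, len, NULL, 0,
			page_sz) != 0) {
		/* unaligned va_addr or len now fails here with rte_errno = EINVAL */
		printf("add failed: %s\n", rte_strerror(rte_errno));
		return -1;
	}
	return 0;
}

With the check moved up front, an unaligned va_addr or len, as well as a
non-NULL iova_addrs whose n_pages does not match len / page_sz, is
rejected with EINVAL before the heap is even looked up.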

> lib/librte_eal/common/include/rte_malloc.h | 4 ++--
> lib/librte_eal/common/rte_malloc.c         | 8 +++-----
> 2 files changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/lib/librte_eal/common/include/rte_malloc.h b/lib/librte_eal/common/include/rte_malloc.h
> index 7249e6aae..a5290b074 100644
> --- a/lib/librte_eal/common/include/rte_malloc.h
> +++ b/lib/librte_eal/common/include/rte_malloc.h
> @@ -282,9 +282,9 @@ rte_malloc_get_socket_stats(int socket,
>  * @param heap_name
>  *   Name of the heap to add memory chunk to
>  * @param va_addr
> - *   Start of virtual area to add to the heap
> + *   Start of virtual area to add to the heap. Must be aligned by ``page_sz``.
>  * @param len
> - *   Length of virtual area to add to the heap
> + *   Length of virtual area to add to the heap. Must be aligned by ``page_sz``.
>  * @param iova_addrs
>  *   Array of page IOVA addresses corresponding to each page in this memory
>  *   area. Can be NULL, in which case page IOVA addresses will be set to
> diff --git a/lib/librte_eal/common/rte_malloc.c b/lib/librte_eal/common/rte_malloc.c
> index 0da5ad5e8..46abbfcf6 100644
> --- a/lib/librte_eal/common/rte_malloc.c
> +++ b/lib/librte_eal/common/rte_malloc.c
> @@ -345,6 +345,9 @@ rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
> 
>       if (heap_name == NULL || va_addr == NULL ||
>                       page_sz == 0 || !rte_is_power_of_2(page_sz) ||
> +                     RTE_ALIGN(len, page_sz) != len ||
> +                     !rte_is_aligned(va_addr, page_sz) ||
> +                     ((len / page_sz) != n_pages && iova_addrs != NULL) ||
>                       strnlen(heap_name, RTE_HEAP_NAME_MAX_LEN) == 0 ||
>                       strnlen(heap_name, RTE_HEAP_NAME_MAX_LEN) ==
>                               RTE_HEAP_NAME_MAX_LEN) {
> @@ -367,11 +370,6 @@ rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
>               goto unlock;
>       }
>       n = len / page_sz;
> -     if (n != n_pages && iova_addrs != NULL) {
> -             rte_errno = EINVAL;
> -             ret = -1;
> -             goto unlock;
> -     }
> 
>       rte_spinlock_lock(&heap->lock);
>       ret = malloc_heap_add_external_memory(heap, va_addr, iova_addrs, n,
> -- 
> 2.17.1
