Previously, if mmap() failed to map the page at the requested address
(i.e. it mapped the page elsewhere), we were attempting to unmap the
wrong address. Fix it by unmapping the address we actually mapped, and
jump further down the cleanup path to avoid unmapping memory that was
never allocated.
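
For context, a minimal self-contained sketch of the cleanup pattern in
question follows (this is not the actual DPDK code; map_at_hint(), the
hint address and the mlock() step are made up for illustration): on a
"wrong address" result, munmap() the address we actually got (va) and
jump to the later cleanup label, instead of the earlier label that
unmaps the requested address.

  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  static int map_at_hint(void *addr, size_t alloc_sz)
  {
  	void *va;

  	/* addr is only a hint: without MAP_FIXED, the kernel may pick
  	 * a different address */
  	va = mmap(addr, alloc_sz, PROT_READ | PROT_WRITE,
  			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  	if (va == MAP_FAILED) {
  		perror("mmap");
  		goto resized;
  	}
  	if (va != addr) {
  		fprintf(stderr, "wrong mmap() address\n");
  		/* unmap what we actually mapped, then skip the
  		 * 'mapped' cleanup that would munmap(addr, ...) */
  		munmap(va, alloc_sz);
  		goto resized;
  	}

  	/* simulate a later step that can fail after a successful
  	 * mapping at the requested address */
  	memset(va, 0, alloc_sz);
  	if (mlock(va, alloc_sz) != 0)
  		goto mapped;

  	return 0;

  mapped:
  	/* only valid here, where va == addr */
  	munmap(addr, alloc_sz);
  resized:
  	/* later cleanup: never touches addr */
  	return -1;
  }

  int main(void)
  {
  	/* arbitrary, hypothetical hint address for illustration */
  	return map_at_hint((void *)0x700000000000, 2 * 1024 * 1024) ? 1 : 0;
  }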

Coverity issue: 272602

Fixes: 582bed1e1d1d ("mem: support mapping hugepages at runtime")
Cc: anatoly.bura...@intel.com

Signed-off-by: Anatoly Burakov <anatoly.bura...@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_memalloc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memalloc.c b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
index 8420a26..6a75e5b 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memalloc.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
@@ -466,7 +466,8 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
        }
        if (va != addr) {
                RTE_LOG(DEBUG, EAL, "%s(): wrong mmap() address\n", __func__);
-               goto mapped;
+               munmap(va, alloc_sz);
+               goto resized;
        }
        /* for non-single file segments, we can close fd here */
        if (!internal_config.single_file_segments)
-- 
2.7.4
