On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
If a device reports addressing limitations through a DMA mask,
the IOVAs for mapped memory need to be checked to ensure
correct functionality.

Previous patches introduced this DMA check for the main memory code
currently in use, but other options, such as legacy memory and the
no-hugepages mode, also need to be considered.

This patch adds the DMA check for those cases.

Signed-off-by: Alejandro Lucero <alejandro.luc...@netronome.com>
---

IMO this needs to be integrated with patch 5.
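
For anyone following the thread, the driver-side counterpart is
roughly the below: a device that can only address, say, 40 bits asks
EAL to verify that every mapped IOVA fits within that mask. This is
only a sketch based on this series; the helper name
(rte_mem_check_dma_mask) and the log type are illustrative:

  #include <rte_log.h>
  #include <rte_memory.h>

  #define EXAMPLE_DMA_MASK_BITS 40 /* hypothetical 40-bit device limit */

  static int
  example_check_device_dma(void)
  {
          /* returns 0 when all mapped memory is addressable by the device */
          if (rte_mem_check_dma_mask(EXAMPLE_DMA_MASK_BITS) != 0) {
                  RTE_LOG(ERR, PMD,
                          "mapped memory is out of reach for this device\n");
                  return -1;
          }
          return 0;
  }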

  lib/librte_eal/linuxapp/eal/eal_memory.c | 17 +++++++++++++++++
  1 file changed, 17 insertions(+)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index fce86fda6..2a3a8c7a3 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -1393,6 +1393,14 @@ eal_legacy_hugepage_init(void)
                        addr = RTE_PTR_ADD(addr, (size_t)page_sz);
                }
+               if (mcfg->dma_maskbits) {
+                       if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
+                               RTE_LOG(ERR, EAL,
+                                       "%s(): couldn't allocate memory due to DMA mask\n",

I would use suggested rewording from patch 5 :)

+                                       __func__);
+                               goto fail;
+                       }
+               }
                return 0;
        }
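
As an aside for reviewers, the per-segment decision inside
rte_mem_check_dma_mask_unsafe() boils down to a bit test against the
mask. A simplified, self-contained sketch (the real helper also walks
every memseg list, which this omits):

  #include <stdbool.h>
  #include <stdint.h>

  /* Simplified sketch: does a single IOVA fit within 'maskbits' bits?
   * The EAL helper applies this test to the IOVA of every mapped segment. */
  static bool
  iova_fits_dma_mask(uint64_t iova, uint8_t maskbits)
  {
          uint64_t mask;

          if (maskbits >= 64)
                  return true; /* no effective limit; also avoids UB in the shift */
          mask = ~((1ULL << maskbits) - 1);
          return (iova & mask) == 0;
  }
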
@@ -1628,6 +1636,15 @@ eal_legacy_hugepage_init(void)
                rte_fbarray_destroy(&msl->memseg_arr);
        }
+       if (mcfg->dma_maskbits) {
+               if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
+                       RTE_LOG(ERR, EAL,
+                               "%s(): couldn't allocate memory due to DMA mask\n",

Same as above.

+                               __func__);
+                       goto fail;
+               }
+       }
+
        return 0;
fail:
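
One more note: since a dma_maskbits of zero means no mask is set
(hence the guards above), when several devices declare limits the
stored value has to end up being the most restrictive one, so that
this single check covers all of them. A hypothetical illustration of
that bookkeeping (not the series' actual code):

  #include <rte_eal_memconfig.h>

  /* Hypothetical helper: keep the most restrictive (smallest) mask so
   * one check against mcfg->dma_maskbits is valid for every device.
   * Zero means no mask has been set yet. */
  static void
  example_update_dma_maskbits(struct rte_mem_config *mcfg, uint8_t maskbits)
  {
          if (mcfg->dma_maskbits == 0 || maskbits < mcfg->dma_maskbits)
                  mcfg->dma_maskbits = maskbits;
  }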



--
Thanks,
Anatoly
