On 26-Nov-18 9:15 AM, Asaf Sinai wrote:
Hi,

We have two NUMA nodes in our system, and we try to allocate a single DPDK memory pool
on each node.
However, we see no difference when enabling/disabling the
"CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" configuration option.
We expected that disabling it would allocate pools only on one NUMA node (probably NUMA 0), but it
actually allocates pools on both nodes, according to the "socket_id" parameter passed to
the "rte_mempool_create" API.
We have 192GB of memory, so NUMA 1 memory starts at address 0x1800000000.
As you can see below, "undDpdkPoolNameSocket_1" was indeed allocated on NUMA 1, as we
wanted, even though "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" is disabled:

CONFIG_RTE_LIBRTE_VHOST_NUMA=n
CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n

created poolName=undDpdkPoolNameSocket_0, nbufs=887808, bufferSize=2432, 
total=2059MB
(memZone: name=MP_undDpdkPoolNameSocket_0, socket_id=0, 
vaddr=0x1f2c0427d00-0x1f2c05abe00, paddr=0x178e627d00-0x178e7abe00, 
len=1589504, hugepage_sz=2MB)
created poolName=undDpdkPoolNameSocket_1, nbufs=887808, bufferSize=2432, 
total=2059MB
(memZone: name=MP_undDpdkPoolNameSocket_1, socket_id=1, 
vaddr=0x1f57fa7be40-0x1f57fbfff40, paddr=0x2f8247be40-0x2f825fff40, 
len=1589504, hugepage_sz=2MB)
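
For reference, this is roughly how we create the per-socket pools. This is a minimal
sketch rather than our exact code: the element count and size are taken from the log
above, and the helper name is made up for illustration.

#include <stdio.h>

#include <rte_mempool.h>

/*
 * Minimal sketch: create one mempool per NUMA socket by passing the target
 * socket to the socket_id argument of rte_mempool_create().
 * Element count and size match the log above; everything else is illustrative.
 */
static struct rte_mempool *
create_pool_on_socket(int socket_id)
{
	char name[RTE_MEMPOOL_NAMESIZE];

	snprintf(name, sizeof(name), "undDpdkPoolNameSocket_%d", socket_id);

	return rte_mempool_create(name,
				  887808,     /* number of elements */
				  2432,       /* element size in bytes */
				  0,          /* per-lcore cache size */
				  0,          /* private data size */
				  NULL, NULL, /* pool init callback + arg */
				  NULL, NULL, /* object init callback + arg */
				  socket_id,  /* NUMA socket to allocate on */
				  0);         /* flags */
}

On failure, rte_mempool_create() returns NULL and sets rte_errno; with the option
disabled, that is what we expected to happen for the socket 1 call.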

Does anyone know what the "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" configuration option
is used for?

Thanks,
Asaf


Hi Asaf,

I cannot reproduce this behavior. I just tried running testpmd with DPDK 18.08 as well as the latest master [1], and DPDK could not successfully allocate a mempool on socket 1.

Did you reconfigure and recompile DPDK after this config change?

[1] The latest master crashes on init in this configuration; fix: http://patches.dpdk.org/patch/48338/

--
Thanks,
Anatoly
