Hi Anatoly/Yigit,

I've prepared a patch to fix this issue and will send it out once the
internal review is done.
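To make the direction concrete before the formal patch goes out, below is a
rough sketch along the lines Anatoly suggested: register the NUMA sockets of
the probed ethdev ports in addition to the lcore sockets, so that a port
reporting socket 0 still gets a mempool. This is a sketch only, not the
reviewed patch; the helper name and the fallback for ports without NUMA
affinity are placeholders, while socket_ids[], num_sockets and
new_socket_id() are the existing internals of app/test-pmd/testpmd.c visible
in the diff below.

#include <rte_ethdev.h>

/* Sketch: record the socket of every probed ethdev port as well. */
static void
register_ethdev_sockets(void)
{
	uint16_t pid;
	int sock_num;

	RTE_ETH_FOREACH_DEV(pid) {
		sock_num = rte_eth_dev_socket_id(pid);
		/*
		 * Ports with no NUMA affinity (e.g. some vdevs) report
		 * SOCKET_ID_ANY (-1); fall back to socket 0, which is what
		 * the pcap port in the log below ends up requesting.
		 */
		if (sock_num < 0)
			sock_num = 0;
		if (new_socket_id(sock_num) &&
		    num_sockets < RTE_MAX_NUMA_NODES)
			socket_ids[num_sockets++] = sock_num;
	}
}

A hook like this would have to run after device probing but before the
mempools are created in init_config(); whether socket 0 is the right
fallback for ports with no NUMA affinity is one of the points the review
needs to settle.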
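On the --port-numa-config failure quoted at the bottom of this thread: the
expectation is that an explicit (port,socket) entry overrides whatever
socket the driver reports. A minimal sketch of that precedence, assuming
testpmd's numa_support, port_numa[] and NUMA_NO_CONFIG internals (the helper
name is again a placeholder, not the actual fix):

/* Sketch: explicit --port-numa-config beats the driver-reported socket. */
static unsigned int
port_socket_id(portid_t pid)
{
	int sid;

	if (numa_support && port_numa[pid] != NUMA_NO_CONFIG)
		return port_numa[pid];

	/* No explicit configuration: fall back to what the driver reports. */
	sid = rte_eth_dev_socket_id(pid);
	return sid < 0 ? 0 : (unsigned int)sid;
}

With precedence like this, --port-numa-config="(0,1)" should make port 0
configure on socket 1, matching the mempool created for the socket 1 lcores.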
Thanks,
Phil Yang

> -----Original Message-----
> From: Phil Yang (Arm Technology China)
> Sent: Thursday, October 11, 2018 3:12 PM
> To: 'Burakov, Anatoly' <anatoly.bura...@intel.com>; Ferruh Yigit
> <ferruh.yi...@intel.com>; dev-boun...@dpdk.org; dev@dpdk.org
> Cc: bernard.iremon...@intel.com; Gavin Hu (Arm Technology China)
> <gavin...@arm.com>; sta...@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2] app/testpmd: optimize membuf pool
> allocation
>
> > -----Original Message-----
> > From: Burakov, Anatoly <anatoly.bura...@intel.com>
> > Sent: Monday, October 8, 2018 7:36 PM
> > To: Ferruh Yigit <ferruh.yi...@intel.com>; dev-boun...@dpdk.org;
> > dev@dpdk.org
> > Cc: bernard.iremon...@intel.com; Gavin Hu (Arm Technology China)
> > <gavin...@arm.com>; sta...@dpdk.org; Phil Yang (Arm Technology China)
> > <phil.y...@arm.com>
> > Subject: Re: [dpdk-dev] [PATCH v2] app/testpmd: optimize membuf pool
> > allocation
> >
> > On 08-Oct-18 12:33 PM, Ferruh Yigit wrote:
> > > On 9/12/2018 2:54 AM, dev-boun...@dpdk.org wrote:
> > >> By default, testpmd creates a membuf pool for every NUMA node,
> > >> ignoring the EAL configuration.
> > >>
> > >> Count the available NUMA nodes according to the EAL core mask or
> > >> core list configuration, and create membuf pools only for those
> > >> nodes.
> > >>
> > >> Fixes: c9cafcc ("app/testpmd: fix mempool creation by socket id")
> > >>
> > >> Signed-off-by: Phil Yang <phil.y...@arm.com>
> > >> Acked-by: Gavin Hu <gavin...@arm.com>
> > >> ---
> > >>  app/test-pmd/testpmd.c | 4 ++--
> > >>  1 file changed, 2 insertions(+), 2 deletions(-)
> > >>
> > >> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> > >> index ee48db2..a56af2b 100644
> > >> --- a/app/test-pmd/testpmd.c
> > >> +++ b/app/test-pmd/testpmd.c
> > >> @@ -476,6 +476,8 @@ set_default_fwd_lcores_config(void)
> > >>
> > >>  	nb_lc = 0;
> > >>  	for (i = 0; i < RTE_MAX_LCORE; i++) {
> > >> +		if (!rte_lcore_is_enabled(i))
> > >> +			continue;
> > >>  		sock_num = rte_lcore_to_socket_id(i);
> > >>  		if (new_socket_id(sock_num)) {
> > >>  			if (num_sockets >= RTE_MAX_NUMA_NODES) {
> > >> @@ -485,8 +487,6 @@ set_default_fwd_lcores_config(void)
> > >>  			}
> > >>  			socket_ids[num_sockets++] = sock_num;
> > >>  		}
> > >> -		if (!rte_lcore_is_enabled(i))
> > >> -			continue;
> > >>  		if (i == rte_get_master_lcore())
> > >>  			continue;
> > >>  		fwd_lcores_cpuids[nb_lc++] = i;
> > >>
> > >
> > > This is causing testpmd to fail for the case where all cores are
> > > from socket 1 and a virtual device is added, which tries to
> > > allocate memory from socket 0:
> > >
> > > $ testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo -- -i
> > > ...
> > > Failed to setup RX queue:No mempool allocation on the socket 0
> > > EAL: Error - exiting with code: 1
> > > Cause: Start ports failed
> > >
> >
> > It's an open question as to why the pcap driver tries to allocate on
> > socket 0 when everything is on socket 1, but perhaps a better
> > improvement would be to take into account not only the socket IDs of
> > lcores, but those of ethdev devices as well?
> >
> > --
> > Thanks,
> > Anatoly
>
> Hi Anatoly,
>
> Agree.
>
> Since NUMA awareness is enabled by default in testpmd, the NUMA
> placement of vdev ports should be configurable as well. However, even
> with an explicit port/ring NUMA configuration it still fails:
>
> testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo \
>     --socket-mem=64 -- --numa --port-numa-config="(0,1)" \
>     --ring-numa-config="(0,1,1),(0,2,1)" -i
>
> ...
> Configuring Port 0 (socket 0)
> Failed to setup RX queue:No mempool allocation on the socket 0
> EAL: Error - exiting with code: 1
> Cause: Start ports failed
>
> This should be a defect.
>
> Thanks
> Phil Yang