On 10/15/2018 10:51 AM, Phil Yang (Arm Technology China) wrote:
>> -----Original Message-----
>> From: Ferruh Yigit <[email protected]>
>> Sent: Saturday, October 13, 2018 1:13 AM
>> To: Phil Yang (Arm Technology China) <[email protected]>; [email protected]
>> Cc: nd <[email protected]>; [email protected]
>> Subject: Re: [PATCH] app/testpmd: fix vdev socket initialization
>>
>> On 10/12/2018 10:34 AM, [email protected] wrote:
>>> The cmdline settings of port-numa-config and rxring-numa-config are
>>> flushed by the subsequent init_config() call. If port-numa-config is not
>>> set, virtual device ports are allocated to socket 0, which causes a
>>> failure when socket 0 is unavailable.
>>>
>>> eg:
>>> testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo
>>> --socket-mem=64 -- --numa --port-numa-config="(0,1)"
>>> --ring-numa-config="(0,1,1),(0,2,1)" -i
>>>
>>> ...
>>> Configuring Port 0 (socket 0)
>>> Failed to setup RX queue:No mempool allocation on the socket 0
>>> EAL: Error - exiting with code: 1
>>> Cause: Start ports failed
>>>
>>> Fix by allocating the device ports to the first available socket, or to
>>> the socket configured in port-numa-config.
>>
>> I confirm this fixes the issue by making vdev allocate from an available
>> socket instead of the hardcoded socket 0; overall this makes sense.
>>
>> But currently there is no way to request a mempool from "socket 0" if only
>> cores from "socket 1" are provided in "-l", even with "port-numa-config"
>> and "rxring-numa-config".
>> Both this behavior and the problem this patch fixes were introduced by
>> commit dbfb8ec7094c ("app/testpmd: optimize mbuf pool allocation").
>>
>> It is good to have optimized mempool allocation, but I think it shouldn't
>> limit the tool. If the user wants mempools from a specific socket, let
>> them have it.
>>
>> What about changing the default behavior to:
>> 1- Allocate mempools only from the sockets of the cores provided in the
>> coremask (current approach)
>> 2- Plus, allocate mempools from the sockets of attached devices (this is
>> an alternative solution to this patch; your solution seems better for
>> virtual devices, but for physical devices allocating from the socket the
>> device is attached to can be better)
>> 3- Plus, allocate mempools from the sockets provided in "port-numa-config"
>> and "rxring-numa-config"
>>
>> What do you think?
>
> Hi Ferruh,
>
> Totally agreed with your suggestion.
>
> As I understand, allocating mempools from the sockets of attached devices
> will enable the cross-NUMA scenario for testpmd.
Yes it will.
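For illustration only (not part of the patch): once the device sockets are
registered, a cross-NUMA run could look roughly like the below, assuming a
physical NIC attached to socket 0 while all forwarding cores come from
socket 1:

    testpmd -l <cores from socket 1> -w <PCI address of the socket-0 NIC> \
        --socket-mem=64,64 -- --numa -i

With mempools created for both the core sockets and the device socket, port 0
should then come up without the "No mempool allocation on the socket 0"
failure shown above.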
>
> Below is my fix for the physical port mempool allocation issue. Is it
> better to separate it into a new patch on top of this one, or to rework
> this one by adding the fix below? I prefer to add a new one because the
> current patch already fixes two defects. Anyway, I will follow your comment.
+1 to separating it into a new patch, so I will review the existing patch as is.
The code below looks good, but I am not sure whether it should be in
`set_default_fwd_ports_config()`, or perhaps in `set_default_fwd_lcores_config()`?
And port-numa-config and rxring-numa-config are still not covered.
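Something along these lines could cover the port-numa-config part (a rough,
untested sketch: it assumes port_numa[] is already filled by the command-line
parser, reuses the existing socket_ids[]/num_sockets/new_socket_id() helpers
from testpmd, and the register_port_numa_config_sockets() name is made up
here; --ring-numa-config would need the same treatment for
rxring_numa[]/txring_numa[]):

    /* Rough sketch only: register the sockets requested via
     * --port-numa-config so that mempools are also created on them.
     * Assumes port_numa[] has already been parsed.
     */
    static void
    register_port_numa_config_sockets(void)
    {
            portid_t pid;

            RTE_ETH_FOREACH_DEV(pid) {
                    uint8_t socket_id = port_numa[pid];

                    if (socket_id != NUMA_NO_CONFIG &&
                        new_socket_id(socket_id)) {
                            if (num_sockets >= RTE_MAX_NUMA_NODES)
                                    rte_exit(EXIT_FAILURE,
                                            "Total sockets greater than %u\n",
                                            RTE_MAX_NUMA_NODES);
                            socket_ids[num_sockets++] = socket_id;
                    }
            }
    }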
>
> static void
> set_default_fwd_ports_config(void)
> {
>         portid_t pt_id;
>         int i = 0;
>
>         RTE_ETH_FOREACH_DEV(pt_id) {
>                 fwd_ports_ids[i++] = pt_id;
>
> +               /* Update sockets info according to the attached device */
> +               int socket_id = rte_eth_dev_socket_id(pt_id);
> +               if (socket_id >= 0 && new_socket_id(socket_id)) {
> +                       if (num_sockets >= RTE_MAX_NUMA_NODES) {
> +                               rte_exit(EXIT_FAILURE,
> +                                       "Total sockets greater than %u\n",
> +                                       RTE_MAX_NUMA_NODES);
> +                       }
> +                       socket_ids[num_sockets++] = socket_id;
> +               }
> +       }
>
>         nb_cfg_ports = nb_ports;
>         nb_fwd_ports = nb_ports;
> }
>
> Thanks
> Phil Yang
>
>>
>>
>>>
>>> Fixes: 487f9a5 ("app/testpmd: fix NUMA structures initialization")
>>>
>>> Signed-off-by: Phil Yang <[email protected]>
>>> Reviewed-by: Gavin Hu <[email protected]>
>>
>> <...>