On 1/29/23 02:33, Cheng Li wrote:
> On Fri, Jan 27, 2023 at 04:04:55PM +0100, Ilya Maximets wrote:
>> On 1/24/23 16:52, Kevin Traynor wrote:
>>> On 08/01/2023 03:55, Cheng Li wrote:
>>>> In my test, if one logical core is pinned to a PMD thread while the
>>>> other logical core (of the same physical core) is not, the PMD
>>>> performance is affected by the load on the not-pinned logical core.
>>>> This makes it difficult to estimate the loads during a dry-run.
>>>>
>>>> Signed-off-by: Cheng Li <lic...@chinatelecom.cn>
>>>> ---
>>>>   Documentation/topics/dpdk/pmd.rst | 4 ++++
>>>>   1 file changed, 4 insertions(+)
>>>>
>>>> diff --git a/Documentation/topics/dpdk/pmd.rst b/Documentation/topics/dpdk/pmd.rst
>>>> index 9006fd4..b220199 100644
>>>> --- a/Documentation/topics/dpdk/pmd.rst
>>>> +++ b/Documentation/topics/dpdk/pmd.rst
>>>> @@ -312,6 +312,10 @@ If not set, the default variance improvement threshold is 25%.
>>>>      when all PMD threads are running on cores from a single NUMA node. In this
>>>>      case cross-NUMA datapaths will not change after reassignment.
>>>> +    For the same reason, please ensure that the pmd threads are pinned to SMT
>>>> +    siblings if HyperThreading is enabled. Otherwise, PMDs within a NUMA may
>>>> +    not have the same performance.
>>
>> Uhm... Am I reading this wrong, or does this note suggest pinning PMD
>> threads to SMT siblings?  It sounds like that's the opposite of what you
>> were trying to say.  Siblings share the same physical core, so if some
>> PMDs are pinned to siblings, the load prediction cannot work correctly.
> 
> Thanks for the review, Ilya.
> 
> The note indeed suggests pinning PMD threads to siblings. Siblings share
> the same physical core; if a PMD is pinned to one sibling while the other
> sibling of the same physical core is left unpinned, the load prediction may
> not work correctly, because the pinned sibling's performance may be affected
> by the workload on the not-pinned sibling. So we suggest pinning both
> siblings of the same physical core.
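> 
> As an illustration only (the CPU numbers and the resulting mask below are
> hypothetical), the sibling pairs can be read from sysfs and both siblings
> of each chosen physical core included in the pmd-cpu-mask:
> 
>   # logical CPUs sharing a physical core with CPU 2, e.g. "2,14"
>   $ cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list
> 
>   # pin PMDs to both siblings of that core, i.e. CPUs 2 and 14
>   $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x4004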

But this makes sense only if all the PMD threads are on siblings of the
same physical core.  If more than one physical core is involved, the load
calculations will be incorrect.  For example, let's say we have 4 threads
A, B, C and D, where A and B are siblings and C and D are siblings.  And
it happens that we have only 2 ports, both of which are assigned to A.
It makes a huge difference whether we move one of the ports from A to B
or from A to C.  It is an oversimplified example, but we can't rely on
load calculations in the general case if PMD threads are running on SMT
siblings.
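
FWIW, the per-rxq cycle measurements that the dry run works from can be
inspected on a running system with, e.g.:

  $ ovs-appctl dpif-netdev/pmd-rxq-show

but those numbers already include any slowdown caused by a busy SMT sibling,
so they don't tell us how the same rxq would perform after being moved to a
core whose sibling is loaded differently.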

> 
> 
>>
>> Nit: s/pmd/PMD/
>>
>> Best regards, Ilya Maximets.
>>
>>>> +
>>>>   The minimum time between 2 consecutive PMD auto load balancing iterations can
>>>>   also be configured by::
>>>>   
>>>
>>> I don't think it's a hard requirement, as siblings should not have as much
>>> impact as cross-NUMA might, but it's probably good advice in general.
>>>
>>> Acked-by: Kevin Traynor <ktray...@redhat.com>
>>>
>>

_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
