Hi Wang,
I believe that the PMD stats processing cycles include EMC processing time.
This is just in the context of your results being surprising. It could be a
factor if you are using code where the bug exists. The patch carries a fixes:
tag (I think) that should help you figure out if your
Hi Jan,
Do you have some test data about the cross-NUMA impact?
Thanks.
Br,
Wang Zhike
-Original Message-
From: Jan Scheurich [mailto:jan.scheur...@ericsson.com]
Sent: Wednesday, September 06, 2017 9:33 PM
To: O Mahony, Billy; 王志克; Darrell Ball; ovs-discuss@openvswitch.org;
Hi Billy,
In my test, almost all traffic went through the EMC, so the fix does not affect
the result, especially since we want to know the difference (not the exact number).
Can you test to get some data? Thanks.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy
Hi Wang,
https://mail.openvswitch.org/pipermail/ovs-dev/2017-August/337309.html
I see it's been acked and is due to be pushed to master with other changes on
the dpdk merge branch so you'll have to apply it manually for now.
/Billy.
> -Original Message-
> From: 王志克
Hi Billy,
I used OVS 2.7.0. I searched the git log, and I am not sure which commit it is. Do
you happen to know?
Yes, I cleared the stats after the traffic run.
Br,
Wang Zhike
From: "O Mahony, Billy"
To: "wangzh...@jd.com" , Jan Scheurich
Hi Wang,
Thanks for the figures. Unexpected results as you say. Two things come to mind:
I’m not sure what code you are using but the cycles per packet statistic was
broken for a while recently. Ilya posted a patch to fix it so make sure you
have that patch included.
Also remember to reset the stats before measuring.
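For reference, resetting and reading back the PMD stats is done via ovs-appctl; a minimal sketch of the usual measurement sequence (the exact counter labels can vary between OVS versions):

```shell
# Clear the accumulated PMD counters so the next read reflects only
# the measurement interval.
ovs-appctl dpif-netdev/pmd-stats-clear

# ... run traffic for the measurement interval ...

# Read back the per-pmd counters; look for "avg processing cycles per
# packet" on each pmd thread.
ovs-appctl dpif-netdev/pmd-stats-show
```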
Hi All,
I tested the cases below and got some performance data. The data shows there is
little impact from cross-NUMA communication, which is different from my
expectation. (Previously I mentioned that cross-NUMA would add 60% cycles, but
I can NOT reproduce it any more.)
@Jan,
You mentioned
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 10:49 PM
To: Kevin Traynor; Jan Scheurich; 王志克; Darrell Ball;
ovs-discuss@openvswitch.org; ovs-...@openvswitch.org
Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question
Hi Billy,
Please see my reply in line.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 9:01 PM
To: 王志克; Darrell Ball; ovs-discuss@openvswitch.org; ovs-...@openvswitch.org;
Kevin Traynor
Subject: RE:
>
> I think the mention of pinning was confusing me a little. Let me see if I
> fully understand your use case: You don't 'want' to pin
> anything but you are using it as a way to force the distribution of rxq from
> a single nic across to PMDs on different NUMAs. As without
> pinning all rxqs
Hi Billy,
> You are going to have to take the hit crossing the NUMA boundary at some
> point if your NIC and VM are on different NUMAs.
>
> So are you saying that it is more expensive to cross the NUMA boundary from
> the pmd to the VM than to cross it from the NIC to the
> PMD?
Indeed, that
> -Original Message-
> From: Kevin Traynor [mailto:ktray...@redhat.com]
> Sent: Wednesday, September 6, 2017 3:02 PM
> To: Jan Scheurich ; O Mahony, Billy
> ; wangzh...@jd.com; Darrell Ball
> ;
> -Original Message-
> From: Kevin Traynor [mailto:ktray...@redhat.com]
> Sent: Wednesday, September 6, 2017 2:50 PM
> To: Jan Scheurich ; O Mahony, Billy
> ; wangzh...@jd.com; Darrell Ball
> ;
On 09/06/2017 02:43 PM, Jan Scheurich wrote:
>>
>> I think the mention of pinning was confusing me a little. Let me see if I
>> fully understand your use case: You don't 'want' to pin
>> anything but you are using it as a way to force the distribution of rxq from
>> a single nic across to PMDs
On 09/06/2017 02:33 PM, Jan Scheurich wrote:
> Hi Billy,
>
>> You are going to have to take the hit crossing the NUMA boundary at some
>> point if your NIC and VM are on different NUMAs.
>>
>> So are you saying that it is more expensive to cross the NUMA boundary from
>> the pmd to the VM than
Hi Wang,
I think the mention of pinning was confusing me a little. Let me see if I fully
understand your use case: you don't 'want' to pin anything, but you are using
it as a way to force the distribution of rxqs from a single NIC across PMDs
on different NUMAs. As without pinning all rxqs
Hi Billy,
See my reply in line.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 7:26 PM
To: 王志克; Darrell Ball; ovs-discuss@openvswitch.org; ovs-...@openvswitch.org;
Kevin Traynor
Subject: RE: [ovs-dev] OVS
Hi Wang,
You are going to have to take the hit crossing the NUMA boundary at some point
if your NIC and VM are on different NUMAs.
So are you saying that it is more expensive to cross the NUMA boundary from the
pmd to the VM than to cross it from the NIC to the PMD?
If so then in that case
Hi Billy,
It depends on the destination of the traffic.
I observed that if the traffic destination is across the NUMA socket boundary,
the "avg processing cycles per packet" would increase by 60% compared with
traffic to the same NUMA socket.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy
On 09/06/2017 08:03 AM, 王志克 wrote:
> Hi Darrell,
>
> pmd-rxq-affinity has below limitation: (so isolated pmd can not be used for
> others, which is not my expectation. Lots of VMs come and go on the fly, and
> manual assignment is not feasible.)
> >>After that PMD threads on cores
Hi Wang,
A change was committed to the head of master on 2017-08-02, "dpif-netdev: Assign
ports to pmds on non-local numa node", which, if I understand your request
correctly, will do what you require.
However, it is not clear to me why you are pinning rxqs to PMDs in the first
instance. Currently if you
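With that change in place, one way to exercise it is simply to give OVS PMD cores on both sockets and let rxq assignment fall back to a non-local NUMA node when the port's local node has no pmd; a sketch (the core mask below is an example value, not a recommendation):

```shell
# Example: pmd-cpu-mask selecting core 0 (socket 0) and core 16 (socket 1);
# pick cores appropriate to your own topology.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x10001

# Inspect which pmd ended up polling each rxq:
ovs-appctl dpif-netdev/pmd-rxq-show
```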
Adding Billy and Kevin
On 9/6/17, 12:22 AM, "Darrell Ball" wrote:
On 9/6/17, 12:03 AM, "王志克" wrote:
Hi Darrell,
pmd-rxq-affinity has below limitation: (so isolated pmd can not be used
for others, which is not my
On 9/6/17, 12:03 AM, "王志克" wrote:
Hi Darrell,
pmd-rxq-affinity has below limitation: (so isolated pmd can not be used for
others, which is not my expectation. Lots of VMs come and go on the fly, and
manual assignment is not feasible.)
>>After
Hi Darrell,
pmd-rxq-affinity has the limitation below: (so an isolated pmd cannot be used
for others, which is not my expectation. Lots of VMs come and go on the fly, and
manual assignment is not feasible.)
>>After that, PMD threads on cores where RX queues were pinned will
become isolated.
You could use pmd-rxq-affinity for the queues you want serviced locally and
let the others go remote
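Concretely, that suggestion would look something like this (the interface name, queue ids, and core ids are example values, not taken from the thread):

```shell
# Pin rxq 0 of dpdk0 to core 3 (assume NUMA 0) and rxq 1 to core 27
# (assume NUMA 1). Note the caveat quoted above: pmds on cores with
# pinned queues become isolated and serve only their pinned queues.
ovs-vsctl set Interface dpdk0 \
    other_config:pmd-rxq-affinity="0:3,1:27"
```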
On 9/5/17, 8:14 PM, "王志克" wrote:
It is a bit different from my expectation.
I have separate CPU and pmd for each NUMA node. However, the physical NIC
It is a bit different from my expectation.
I have separate CPUs and pmds for each NUMA node. However, the physical NIC is
only located on NUMA socket 0, so only some of the CPUs and pmds (the ones on
the same NUMA node) can poll the physical NIC. Since I have multiple rx queues,
I hope some queues can be
This same-NUMA-node limitation was already removed, although the same NUMA node
is preferred for performance reasons.
commit c37813fdb030b4270d05ad61943754f67021a50d
Author: Billy O'Mahony
Date: Tue Aug 1 14:38:43 2017 -0700
dpif-netdev: Assign ports to pmds on non-local numa node
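If you want to check whether a given tree already contains this change, the commit id above can be tested directly (assuming a full clone of the OVS repository):

```shell
# Prints "present" if HEAD contains the non-local NUMA assignment commit.
git merge-base --is-ancestor c37813fdb030b4270d05ad61943754f67021a50d HEAD \
    && echo present || echo missing
```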