Hi Team,
I have installed Packstack Pike on CentOS using OVN as the mechanism
driver. I have one controller and two compute nodes.
- Created one tenant Geneve-based network (added as a port to the router) and a
flat external network (set as the gateway for the router).
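For reference, a minimal sketch of that topology using the OpenStack CLI; the network, subnet and router names and the provider label are illustrative, not taken from this report:
~~~
# Geneve tenant network (the tenant network type used with OVN)
openstack network create tenant-net
openstack subnet create --network tenant-net \
    --subnet-range 192.168.10.0/24 tenant-subnet

# Flat external network; assumes a provider label "extnet" is mapped on the hosts
openstack network create --external --provider-network-type flat \
    --provider-physical-network extnet external-net
openstack subnet create --network external-net --no-dhcp \
    --subnet-range 10.0.0.0/24 external-subnet

# Router: tenant subnet added as a port, external network set as gateway
openstack router create router1
openstack router add subnet router1 tenant-subnet
openstack router set --external-gateway external-net router1
~~~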
~~~
[root@controller
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 10:49 PM
To: Kevin Traynor; Jan Scheurich; 王志克; Darrell Ball;
ovs-discuss@openvswitch.org; ovs-...@openvswitch.org
Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question
Hi Billy,
Please see my reply in line.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 9:01 PM
To: 王志克; Darrell Ball; ovs-discuss@openvswitch.org; ovs-...@openvswitch.org;
Kevin Traynor
Subject: RE:
From: Darrell Ball [mailto:db...@vmware.com]
Sent: 2017-09-06 23:50
To: Huanglili (lee); ovs-discuss@openvswitch.org
CC: b...@nicira.com; caihe; liucheng (J)
Subject: Re: [ovs-discuss] conntrack: Another ct-clean thread crash bug
Hmm, that seems odd.
>
> I think the mention of pinning was confusing me a little. Let me see if I
> fully understand your use case: You don't 'want' to pin
> anything, but you are using it as a way to force the distribution of rxqs from
> a single NIC across PMDs on different NUMAs. As without
> pinning all rxqs
Hi Billy,
> You are going to have to take the hit crossing the NUMA boundary at some
> point if your NIC and VM are on different NUMAs.
>
> So are you saying that it is more expensive to cross the NUMA boundary from
> the pmd to the VM than to cross it from the NIC to the
> PMD?
Indeed, that
Hmm, that seems odd.
Also, the code change you propose below does not make sense and would likely
cause similar crashes itself.
Maybe you can explain what you are trying to do in your testing?
Can you say what traffic you are sending and from which ports?
I’ll take another look at the related
> -Original Message-
> From: Kevin Traynor [mailto:ktray...@redhat.com]
> Sent: Wednesday, September 6, 2017 3:02 PM
> To: Jan Scheurich ; O Mahony, Billy
> ; wangzh...@jd.com; Darrell Ball
> ;
> -Original Message-
> From: Kevin Traynor [mailto:ktray...@redhat.com]
> Sent: Wednesday, September 6, 2017 2:50 PM
> To: Jan Scheurich ; O Mahony, Billy
> ; wangzh...@jd.com; Darrell Ball
> ;
Hi,
I have built OVS-DPDK using DPDK v17.08 and OVS v2.8.0. The
NIC that I am using is a Mellanox ConnectX-3 Pro, which is a dual-port 10G
NIC. The problem with this NIC is that it exposes only one PCI address for
both 10G ports.
So when I am trying to add the two DPDK ports to
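For context, the standard way to attach a DPDK port is by PCI address, which is exactly what becomes ambiguous here; the bridge name and PCI address below are illustrative:
~~~
# Normal single-port case: one PCI address identifies one port.
# With ConnectX-3 both 10G ports sit behind the same PCI address,
# so this devargs string alone cannot tell them apart.
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
    options:dpdk-devargs=0000:05:00.0
~~~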
On 09/06/2017 02:43 PM, Jan Scheurich wrote:
>>
>> I think the mention of pinning was confusing me a little. Let me see if I
>> fully understand your use case: You don't 'want' to pin
>> anything, but you are using it as a way to force the distribution of rxqs from
>> a single NIC across PMDs
On 09/06/2017 02:33 PM, Jan Scheurich wrote:
> Hi Billy,
>
>> You are going to have to take the hit crossing the NUMA boundary at some
>> point if your NIC and VM are on different NUMAs.
>>
>> So are you saying that it is more expensive to cross the NUMA boundary from
>> the pmd to the VM than
Hi,
We hit another vswitchd crash when using ct(nat) (OVS+DPDK).
Program terminated with signal 11, Segmentation fault.
#0 0x00574a0b in hmap_remove (node=0x7f150c6e60a8,
hmap=0x7f1553c40780) at lib/hmap.h:270
while (*bucket != node) {
(gdb) bt
#0 0x00574a0b in
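For context, a minimal ct(nat) pipeline of the kind that exercises this code path could look like the following; the flows and NAT address are illustrative, not taken from this report:
~~~
# Untracked IP traffic goes through conntrack first, then table 1 decides.
ovs-ofctl add-flow br0 "table=0,priority=10,ip,ct_state=-trk,actions=ct(table=1)"
# New connections are committed with source NAT applied.
ovs-ofctl add-flow br0 "table=1,priority=10,ip,ct_state=+trk+new,actions=ct(commit,nat(src=10.0.0.1)),NORMAL"
# Established traffic reuses the existing NAT translation.
ovs-ofctl add-flow br0 "table=1,priority=10,ip,ct_state=+trk+est,actions=ct(nat),NORMAL"
~~~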
Hi Wang,
I think the mention of pinning was confusing me a little. Let me see if I fully
understand your use case: You don't 'want' to pin anything, but you are using
it as a way to force the distribution of rxqs from a single NIC across PMDs
on different NUMAs. As without pinning all rxqs
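To make the pinning under discussion concrete, here is a sketch of spreading one NIC's queues across PMDs on both sockets; the core IDs and port name are illustrative:
~~~
# Run PMDs on core 2 (socket 0) and core 18 (socket 1): mask 0x40004.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x40004
# Two rx queues on the NIC; pin queue 0 to core 2 and queue 1 to core 18.
ovs-vsctl set Interface dpdk0 options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:2,1:18"
~~~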
Hi Billy,
See my reply in line.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 7:26 PM
To: 王志克; Darrell Ball; ovs-discuss@openvswitch.org; ovs-...@openvswitch.org;
Kevin Traynor
Subject: RE: [ovs-dev] OVS
Hi Wang,
You are going to have to take the hit crossing the NUMA boundary at some point
if your NIC and VM are on different NUMAs.
So are you saying that it is more expensive to cross the NUMA boundary from the
pmd to the VM than to cross it from the NIC to the PMD?
If so then in that case
Hi Billy,
It depends on the destination of the traffic.
I observed that if the traffic destination is across the NUMA socket boundary, the
"avg processing cycles per packet" increases by about 60% compared with traffic to
the same NUMA socket.
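That per-packet figure can be read from the PMD statistics, e.g.:
~~~
# Reset the counters, generate traffic, then read them back; the output
# reports "avg processing cycles per packet" for each PMD thread.
ovs-appctl dpif-netdev/pmd-stats-clear
ovs-appctl dpif-netdev/pmd-stats-show
~~~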
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy
Hi Kevin,
Consider the scenario:
One host with one physical NIC, which sits on NUMA socket 0. There are
lots of VMs on this host.
I can see several methods to improve the performance:
1) Try to make sure the VM memory used for networking is always allocated on
socket 0. E.g., if the VM uses 4G
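A sketch of how that placement could be enforced for a guest with libvirt; the domain name and core number are illustrative:
~~~
# Keep the guest's memory on NUMA node 0 so its vhost/virtio buffers stay
# local to the NIC's socket.
virsh numatune vm1 --mode strict --nodeset 0 --config
# Pin vCPU 0 of the guest to a socket-0 host core (core 4 here).
virsh vcpupin vm1 0 4 --config
~~~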
On 09/06/2017 08:03 AM, 王志克 wrote:
> Hi Darrell,
>
> pmd-rxq-affinity has the limitation below (so an isolated pmd can not be used for
> other ports, which is not what I expect. Lots of VMs come and go on the fly, and
> manual assignment is not feasible.)
> >>After that PMD threads on cores
Hi Wang,
A change was committed to the head of master on 2017-08-02, "dpif-netdev: Assign ports
to pmds on non-local numa node", which, if I understand your request correctly,
will do what you require.
However, it is not clear to me why you are pinning rxqs to PMDs in the first
instance. Currently if you
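With that change, one way to test is to drop any manual pinning and simply inspect how the rxqs end up distributed; the port name is illustrative:
~~~
# Remove the manual pinning and look at the resulting rxq-to-PMD mapping.
ovs-vsctl remove Interface dpdk0 other_config pmd-rxq-affinity
ovs-appctl dpif-netdev/pmd-rxq-show
~~~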
Adding Billy and Kevin
On 9/6/17, 12:22 AM, "Darrell Ball" wrote:
On 9/6/17, 12:03 AM, "王志克" wrote:
Hi Darrell,
pmd-rxq-affinity has the limitation below (so an isolated pmd can not be used
for other ports, which is not what I
On 9/6/17, 12:03 AM, "王志克" wrote:
Hi Darrell,
pmd-rxq-affinity has the limitation below (so an isolated pmd can not be used for
other ports, which is not what I expect. Lots of VMs come and go on the fly, and
manual assignment is not feasible.)
>>After
Hi Darrell,
pmd-rxq-affinity has the limitation below (so an isolated pmd can not be used for
other ports, which is not what I expect. Lots of VMs come and go on the fly, and
manual assignment is not feasible.)
>>After that PMD threads on cores where RX queues was pinned will
become isolated.