Re: [ovs-discuss] [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port

2017-09-08 Thread 王志克
Hi All,



I tested the cases below and got some performance data. The data shows there is 
little impact from cross-NUMA communication, which is different from my 
expectation. (Previously I mentioned that cross-NUMA would add 60% cycles, but 
I can NOT reproduce that any more.)



@Jan,

You mentioned cross NUMA communication would cost lots more cycles. Can you 
share your data? I am not sure whether I made some mistake or not.



@All,

Your data is welcome if you have measurements for similar cases. Thanks.



Case1: VM0->PMD0->NIC0

Case2: VM1->PMD1->NIC0

Case3: VM1->PMD0->NIC0

Case4: NIC0->PMD0->VM0

Case5: NIC0->PMD1->VM1

Case6: NIC0->PMD0->VM1



             VM Tx Mpps   Host Tx Mpps   avg cycles per packet   avg processing cycles per packet

Case1        1.4          1.4            512                     415

Case2        1.3          1.3            537                     436

Case3        1.35         1.35           514                     390



             VM Rx Mpps   Host Rx Mpps   avg cycles per packet   avg processing cycles per packet

Case4        1.3          1.3            549                     533

Case5        1.3          1.3            559                     540

Case6        1.28         1.28           568                     551
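
(In case it helps anyone reproducing this: a sketch of how I assume the 
rxq-to-PMD mapping for each case can be confirmed; pmd-rxq-show is the standard 
OVS-DPDK appctl command, and the core/port numbers it prints depend on the setup.)

# Shows, per PMD (core and NUMA node), which port rx queues it polls,
# e.g. to confirm which PMD really serves NIC0 and the VM vhost ports.
ovs-appctl dpif-netdev/pmd-rxq-show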



Br,

Wang Zhike



-Original Message-
From: Jan Scheurich [mailto:jan.scheur...@ericsson.com]
Sent: Wednesday, September 06, 2017 9:33 PM
To: O Mahony, Billy; 王志克; Darrell Ball; 
ovs-discuss@openvswitch.org; 
ovs-...@openvswitch.org; Kevin Traynor
Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port



Hi Billy,



> You are going to have to take the hit crossing the NUMA boundary at some 
> point if your NIC and VM are on different NUMAs.

>

> So are you saying that it is more expensive to cross the NUMA boundary from 
> the pmd to the VM than to cross it from the NIC to the

> PMD?



Indeed, that is the case: If the NIC crosses the QPI bus when storing packets 
in the remote NUMA there is no cost involved for the PMD. (The QPI bandwidth is 
typically not a bottleneck.) The PMD only performs local memory access.
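
(Side note: the NUMA node a NIC is attached to can be read from sysfs; a sketch, 
with the PCI address and netdev name as placeholders:)

# NUMA node of the NIC's PCI device (-1 means no NUMA information)
cat /sys/bus/pci/devices/0000:05:00.0/numa_node
# Same information via the kernel netdev name, if the port is still bound to the kernel
cat /sys/class/net/eth0/device/numa_node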



On the other hand, if the PMD crosses the QPI when copying packets into a 
remote VM, there is a huge latency penalty involved, consuming lots of PMD 
cycles that cannot be spent on processing packets. We at Ericsson have observed 
exactly this behavior.



This latency penalty becomes even worse when the LLC cache hit rate is degraded 
due to LLC cache contention with real VNFs and/or unfavorable packet buffer 
re-use patterns as exhibited by real VNFs compared to typical synthetic 
benchmark apps like DPDK testpmd.



>

> If so then in that case you'd like to have two (for example) PMDs polling 2 
> queues on the same NIC. With the PMDs on each of the

> NUMA nodes forwarding to the VMs local to that NUMA?

>

> Of course your NIC would then also need to be able to know which VM (or at least 
> which NUMA the VM is on) in order to send the frame

> to the correct rxq.



That would indeed be optimal but hard to realize in the general case (e.g. with 
VXLAN encapsulation) as the actual destination is only known after tunnel pop. 
Here perhaps some probabilistic steering of RSS hash values based on measured 
distribution of final destinations might help in the future.



But even without that in place, we need PMDs on both NUMAs anyhow (for 
NUMA-aware polling of vhostuser ports), so why not use them to also poll remote 
eth ports? We can achieve better average performance with fewer PMDs than with 
the current limitation to NUMA-local polling.
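
(For illustration, a sketch of how PMDs can be placed on both NUMA nodes via 
the pmd-cpu-mask; the core numbering is just an example and depends on the host 
topology:)

# Run one PMD on core 2 (NUMA 0) and one on core 22 (NUMA 1): bits 2 and 22 set.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x400004
# Check which rx queues each PMD ends up polling.
ovs-appctl dpif-netdev/pmd-rxq-show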



BR, Jan


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] Tenant multicast traffic on OpenStack with OVN

2017-09-08 Thread Tom Verdaat
Hi all,

I've been looking into how we can enable multicast traffic for our
OpenStack tenants. We once had this on an old OpenStack release that was
using nova-networking and a flat network, but lost this when moving to a
Neutron provider networking setup. Apparently - according to Cisco - it
doesn't work with a regular Neutron ML2 + OVS setup due to what the neutron
L3 agent does with namespaces and iptables.

In reply to an audience question about it during the Boston OpenStack
Summit talk on OVN, somebody mentioned that OVS can handle multicast traffic
just fine, and that this means that with OVN it could, potentially, also work
for tenants. So with the new OpenStack Pike release, OVN active/passive HA
support making it suitable for production, and OVN doing away with all the
Neutron agents (including the L3 agent?), things are looking up. Just
wondering if anyone could tell me:

Would multicast work when using Neutron with OVN?

If so, would it work with provider networks, flat networking, or both?

Any insights into the state of multicast support are greatly appreciated!

Thanks,

Tom
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] conntrack: Another ct-clean thread crash bug

2017-09-08 Thread wangyunjian
Operations on the buckets->connections hmap should be protected by a lock shared
between the ovs-vswitchd thread and the pmd thread (or the ct clean thread). But
conn_clean() releases and re-acquires that lock. During that window the hmap may
be changed by another thread, and the node->next that the sweeper cached may be
removed from the hmap.

process_one() {                      // pmd thread
    ct_lock_lock(ctb.lock);
    conn_clean() {
        ct_lock_unlock(ctb.lock);    // lock dropped here...
        ...
        ct_lock_lock(ctb.lock);      // ...and re-taken: hmap may have changed
    }
    ct_lock_unlock(ctb.lock);
}

conntrack_flush() {                  // main thread
    ct_lock_lock(ctb.lock);
    nat_clean() {
        ct_lock_unlock(ctb.lock);    // same unlock/relock window
        ...
        ct_lock_lock(ctb.lock);
    }
    ct_lock_unlock(ctb.lock);
}

clean_thread_main() {                // ct clean thread
    ct_lock_lock(ctb.lock);
    conn_clean() {
        ct_lock_unlock(ctb.lock);    // same unlock/relock window
        ...
        ct_lock_lock(ctb.lock);
    }
    ct_lock_unlock(ctb.lock);
}
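
To make the window concrete, here is a small self-contained C sketch (not OVS
code; all names are made up) of the same pattern: the sweeper caches node->next,
calls a helper that drops and re-takes the lock, and the cached pointer may then
refer to a node that another thread has already unlinked or freed.

#include <pthread.h>

struct node {
    struct node *next;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *head;

/* Mirrors conn_clean(): temporarily drops the lock while doing slow work. */
static void clean_node(struct node *n)
{
    (void)n;
    pthread_mutex_unlock(&lock);
    /* ... slow work (e.g. freeing NAT state) done without the lock held ... */
    pthread_mutex_lock(&lock);
    /* While the lock was dropped, another thread may have modified the list,
     * including the node the caller cached as "next". */
}

/* Mirrors sweep_bucket(): caching "next" is only safe if the list cannot
 * change while we hold the lock -- calling clean_node() breaks that. */
static void sweep(void)
{
    pthread_mutex_lock(&lock);
    for (struct node *n = head; n; ) {
        struct node *next = n->next;   /* cached before the lock is dropped */
        clean_node(n);                 /* drops and re-takes the lock */
        n = next;                      /* may now be a dangling pointer */
    }
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    sweep();   /* harmless with an empty list; the hazard needs a concurrent writer */
    return 0;
}

A fix along these lines would either keep the lock held for the whole bucket
sweep, or restart the bucket scan after any call that may have dropped the lock.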

> -Original Message-
> From: ovs-discuss-boun...@openvswitch.org [mailto:ovs-discuss-
> boun...@openvswitch.org] On Behalf Of Darrell Ball
> Sent: Wednesday, September 06, 2017 11:50 PM
> To: Huanglili (lee) ; ovs-
> disc...@openvswitch.org
> Cc: b...@nicira.com; caihe ; liucheng (J)
> 
> Subject: Re: [ovs-discuss] conntrack: Another ct-clean thread crash bug
> 
> Hmm, that seems odd.
> Also, the code change you propose below does not make sense and would
> likely cause similar crashes itself.
> 
> Maybe you can explain what you are trying to do in your testing?
> Can you say what traffic you are sending and from which ports?
> 
> I’ll take another look at the related code.
> 
> Darrell
> 
> 
> On 9/6/17, 6:14 AM, "Huanglili (lee)"  wrote:
> 
> Hi,
> We met another vswitchd crash when we use ct(nat) (ovs+dpdk).
> 
> Program terminated with signal 11, Segmentation fault.
> #0  0x00574a0b in hmap_remove (node=0x7f150c6e60a8,
> hmap=0x7f1553c40780) at lib/hmap.h:270
>   while (*bucket != node) {
> 
> (gdb) bt
> #0  0x00574a0b in hmap_remove (node=0x7f150c6e60a8,
> hmap=0x7f1553c40780)
> #1  sweep_bucket (limit=1808, now=563303851, ctb=0x7f1553c40778,
> ct=0x7f1553c3f9a8)
> #2  conntrack_clean (now=563303851, ct=0x7f1553c3f9a8)
> #3  clean_thread_main (f_=0x7f1553c3f9a8)
> 
> This crash can be triggered by using the following flows. Maybe the flows are
> not reasonable, but they shouldn't trigger a crash:
> "table=0,priority=2,in_port=1 actions=resubmit(,2)
> table=0,priority=2,in_port=4 actions=resubmit(,2)
> table=0,priority=0 actions=drop
> table=0,priority=1 actions=resubmit(,10)
> table=1,priority=0 actions=resubmit(,14)
> table=2,priority=0 actions=resubmit(,4)
> table=4,priority=0 actions=resubmit(,14)
> table=10,priority=2,arp actions=resubmit(,12)
> table=10,priority=1,dl_src=90:E2:BA:69:CD:61 actions=resubmit(,1)
> table=10,priority=0 actions=drop
> 
> table=12,priority=3,arp,dl_src=90:E2:BA:69:CD:61,arp_spa=194.168.100.1,arp
> _sha=90:E2:BA:69:CD:61 actions=resubmit(,1)
> table=12,priority=2,arp actions=drop
> table=14,priority=6,ip actions=ct(table=16,zone=1)
> table=14,priority=0 actions=resubmit(,20)
> table=14,priority=20,ip,ip_frag=yes,actions=resubmit(,18)
> table=16,priority=20,ct_state=+est+trk,ip actions=resubmit(,20)
> table=16,priority=15,ct_state=+rel+trk,ip actions=resubmit(,20)
> table=16,priority=10,ct_mark=0x8000/0x8000,udp
> actions=resubmit(,20)
> table=16,priority=5,ct_state=+new+trk,ip,in_port=3 actions=resubmit(,18)
> table=16,priority=5,ct_state=+new+trk,ip,in_port=4 actions=resubmit(,18)
> table=16,priority=5,ct_state=+new+trk,ip,in_port=2
> actions=ct(commit,zone=1,exec(load:0x1-
> >NXM_NX_CT_MARK[31])),output:4
> table=16,priority=5,ct_state=+new+trk,ip,in_port=1
> actions=ct(commit,zone=1,exec(load:0x1-
> >NXM_NX_CT_MARK[31])),output:3
> table=18,priority=0,in_port=3 actions=ct(zone=1,table=24)
> table=18,priority=0,in_port=2 actions=output:4
> table=18,priority=0,in_port=4,ip
> actions=ct(commit,zone=1,nat(dst=194.168.100.1)),2
> table=18,priority=0,in_port=1 actions=output:3
> table=20,priority=10,in_port=3,ip actions=ct(zone=1,table=22)
> table=20,priority=10,in_port=4,ip actions=ct(zone=1,table=23)
> table=20,priority=1 actions=ct(zone=1,table=18)
> table=22,priority=10,in_port=3 action=4
> table=23,priority=10,in_port=4 action=3
> table=24,priority=10,in_port=3 action=1"
> 
> The networking:
> vm
>  |
> br-ply - br-linux
>  |
> br-int
> 
> We find that rev_conn is sometimes in the list of ctb->exp_lists[].
> The following change solves this problem, but we can't explain why:
> 
> $ git diff
> diff --git a/lib/conntrack.c b/lib/conntrack.c
> index 419cb1

Re: [ovs-discuss] [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port

2017-09-08 Thread O Mahony, Billy
Hi Wang,

Thanks for the figures. Unexpected results as you say. Two things come to mind:

I’m not sure what code you are using but the cycles per packet statistic was 
broken for a while recently. Ilya posted a patch to fix it so make sure you 
have that patch included.

Also remember to reset the pmd stats after you start your traffic and then 
measure after a short duration.
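
(For reference, a sketch of the measurement sequence I have in mind, using the 
standard dpif-netdev appctl commands; the 10-second window is arbitrary:)

# with traffic already running at steady state:
ovs-appctl dpif-netdev/pmd-stats-clear
sleep 10
ovs-appctl dpif-netdev/pmd-stats-show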

Regards,
Billy.



From: 王志克 [mailto:wangzh...@jd.com]
Sent: Friday, September 8, 2017 8:01 AM
To: Jan Scheurich ; O Mahony, Billy 
; Darrell Ball ; 
ovs-discuss@openvswitch.org; ovs-...@openvswitch.org; Kevin Traynor 

Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port


Hi All,



I tested below cases, and get some performance data. The data shows there is 
little impact for cross NUMA communication, which is different from my 
expectation. (Previously I mentioned that cross NUMA would add 60% cycles, but 
I can NOT reproduce it any more).



@Jan,

You mentioned cross NUMA communication would cost lots more cycles. Can you 
share your data? I am not sure whether I made some mistake or not.



@All,

Welcome your data if you have data for similar cases. Thanks.



Case1: VM0->PMD0->NIC0

Case2:VM1->PMD1->NIC0

Case3:VM1->PMD0->NIC0

Case4:NIC0->PMD0->VM0

Case5:NIC0->PMD1->VM1

Case6:NIC0->PMD0->VM1



             VM Tx Mpps   Host Tx Mpps   avg cycles per packet   avg processing cycles per packet

Case1        1.4          1.4            512                     415

Case2        1.3          1.3            537                     436

Case3        1.35         1.35           514                     390



             VM Rx Mpps   Host Rx Mpps   avg cycles per packet   avg processing cycles per packet

Case4        1.3          1.3            549                     533

Case5        1.3          1.3            559                     540

Case6        1.28         1.28           568                     551



Br,

Wang Zhike



-Original Message-
From: Jan Scheurich [mailto:jan.scheur...@ericsson.com]
Sent: Wednesday, September 06, 2017 9:33 PM
To: O Mahony, Billy; 王志克; Darrell Ball; 
ovs-discuss@openvswitch.org; 
ovs-...@openvswitch.org; Kevin Traynor
Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port



Hi Billy,



> You are going to have to take the hit crossing the NUMA boundary at some 
> point if your NIC and VM are on different NUMAs.

>

> So are you saying that it is more expensive to cross the NUMA boundary from 
> the pmd to the VM that to cross it from the NIC to the

> PMD?



Indeed, that is the case: If the NIC crosses the QPI bus when storing packets 
in the remote NUMA there is no cost involved for the PMD. (The QPI bandwidth is 
typically not a bottleneck.) The PMD only performs local memory access.



On the other hand, if the PMD crosses the QPI when copying packets into a 
remote VM, there is a huge latency penalty involved, consuming lots of PMD 
cycles that cannot be spent on processing packets. We at Ericsson have observed 
exactly this behavior.



This latency penalty becomes even worse when the LLC cache hit rate is degraded 
due to LLC cache contention with real VNFs and/or unfavorable packet buffer 
re-use patterns as exhibited by real VNFs compared to typical synthetic 
benchmark apps like DPDK testpmd.



>

> If so then in that case you'd like to have two (for example) PMDs polling 2 
> queues on the same NIC. With the PMDs on each of the

> NUMA nodes forwarding to the VMs local to that NUMA?

>

> Of course your NIC would then also need to be able know which VM (or at least 
> which NUMA the VM is on) in order to send the frame

> to the correct rxq.



That would indeed be optimal but hard to realize in the general case (e.g. with 
VXLAN encapsulation) as the actual destination is only known after tunnel pop. 
Here perhaps some probabilistic steering of RSS hash values based on measured 
distribution of final destinations might help in the future.



But even without that in place, we need PMDs on both NUMAs anyhow (for 
NUMA-aware polling of vhostuser ports), so why not use them to also poll remote 
eth ports. We can achieve better average performance with fewer PMDs than with 
the current limitation to NUMA-local polling.



BR, Jan


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port

2017-09-08 Thread 王志克
Hi Billy,

I am using OVS 2.7.0. I searched the git log but am not sure which commit it is. 
Do you happen to know?

Yes, I cleared the stats after traffic run.

Br,
Wang Zhike


From: "O Mahony, Billy" 
To: "wangzh...@jd.com" , Jan Scheurich
, Darrell Ball ,
"ovs-discuss@openvswitch.org" ,
"ovs-...@openvswitch.org" , Kevin Traynor

Subject: Re: [ovs-dev] OVS DPDK NUMA pmd assignment question for
physical port
Message-ID:
<03135aea779d444e90975c2703f148dc58c19...@irsmsx107.ger.corp.intel.com>

Content-Type: text/plain; charset="utf-8"

Hi Wang,

Thanks for the figures. Unexpected results as you say. Two things come to mind:

I'm not sure what code you are using but the cycles per packet statistic was 
broken for a while recently. Ilya posted a patch to fix it so make sure you 
have that patch included.

Also remember to reset the pmd stats after you start your traffic and then 
measure after a short duration.

Regards,
Billy.



From: 王志克 [mailto:wangzh...@jd.com]
Sent: Friday, September 8, 2017 8:01 AM
To: Jan Scheurich ; O Mahony, Billy 
; Darrell Ball ; 
ovs-discuss@openvswitch.org; ovs-...@openvswitch.org; Kevin Traynor 

Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port


Hi All,



I tested below cases, and get some performance data. The data shows there is 
little impact for cross NUMA communication, which is different from my 
expectation. (Previously I mentioned that cross NUMA would add 60% cycles, but 
I can NOT reproduce it any more).



@Jan,

You mentioned cross NUMA communication would cost lots more cycles. Can you 
share your data? I am not sure whether I made some mistake or not.



@All,

Welcome your data if you have data for similar cases. Thanks.



Case1: VM0->PMD0->NIC0

Case2:VM1->PMD1->NIC0

Case3:VM1->PMD0->NIC0

Case4:NIC0->PMD0->VM0

Case5:NIC0->PMD1->VM1

Case6:NIC0->PMD0->VM1



             VM Tx Mpps   Host Tx Mpps   avg cycles per packet   avg processing cycles per packet

Case1        1.4          1.4            512                     415

Case2        1.3          1.3            537                     436

Case3        1.35         1.35           514                     390



             VM Rx Mpps   Host Rx Mpps   avg cycles per packet   avg processing cycles per packet

Case4        1.3          1.3            549                     533

Case5        1.3          1.3            559                     540

Case6        1.28         1.28           568                     551



Br,

Wang Zhike



-Original Message-
From: Jan Scheurich [mailto:jan.scheur...@ericsson.com]
Sent: Wednesday, September 06, 2017 9:33 PM
To: O Mahony, Billy; 王志克; Darrell Ball; 
ovs-discuss@openvswitch.org; 
ovs-...@openvswitch.org; Kevin Traynor
Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port



Hi Billy,



> You are going to have to take the hit crossing the NUMA boundary at some 
> point if your NIC and VM are on different NUMAs.

>

> So are you saying that it is more expensive to cross the NUMA boundary from 
> the pmd to the VM that to cross it from the NIC to the

> PMD?



Indeed, that is the case: If the NIC crosses the QPI bus when storing packets 
in the remote NUMA there is no cost involved for the PMD. (The QPI bandwidth is 
typically not a bottleneck.) The PMD only performs local memory access.



On the other hand, if the PMD crosses the QPI when copying packets into a 
remote VM, there is a huge latency penalty involved, consuming lots of PMD 
cycles that cannot be spent on processing packets. We at Ericsson have observed 
exactly this behavior.



This latency penalty becomes even worse when the LLC cache hit rate is degraded 
due to LLC cache contention with real VNFs and/or unfavorable packet buffer 
re-use patterns as exhibited by real VNFs compared to typical synthetic 
benchmark apps like DPDK testpmd.



>

> If so then in that case you'd like to have two (for example) PMDs polling 2 
> queues on the same NIC. With the PMDs on each of the

> NUMA nodes forwarding to the VMs local to that NUMA?

>

> Of course your NIC would then also need to be able know which VM (or at least 
> which NUMA the VM is on) in order to send the frame

> to the correct rxq.



That would indeed be optimal but hard to realize in the general case (e.g. with 
VXLAN encapsulation) as the actual destination is only known after tunnel pop. 
Here perhaps some probabilistic steering of RSS hash values based on measured 
distribution of final destinations might help in the future.



But even without that in place, we need PMDs on both NUMAs anyhow (for 
NUMA-aware polling of vhostuser ports), so why not use them to also poll remote 
eth ports. We can achieve better average performance with fewer PMDs than with 
the current limitation to NUMA-local polling.



BR, Jan

Re: [ovs-discuss] Tenant multicast traffic on OpenStack with OVN

2017-09-08 Thread O'Reilly, Darragh
Hi Tom,

I can confirm that multicast works very well with Neutron ML2/OVS provider 
networks. See https://gist.github.com/djoreilly/a22ca4f38396e8867215fca0ad67fa28

I don’t know about OVN and multicast.

Regards,
Darragh.

From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of Tom Verdaat
Sent: 08 September 2017 10:26
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] Tenant multicast traffic on OpenStack with OVN

Hi all,
I've been looking into how we can enable multicast traffic for our OpenStack 
tenants. We once had this on an old OpenStack release that was using 
nova-networking and a flat network, but lost this when moving to a Neutron 
provider networking setup. Apparently - according to Cisco - it doesn't work 
with a regular Neutron ML2 + OVS setup due to what the neutron L3 agent does 
with namespaces and iptables.

In reply to an audience questions about it during the Boston OpenStack Summit 
talk on OVN somebody mentioned that OVS can handle multicast traffic just fine 
and that this means using OVN it could, potentially, also work for tenants. So 
with the new OpenStack Pike release, OVN active/passive HA support making it 
suitable for production and OVN doing away with all the Neutron agents 
(including the L3 agent?) things are looking up. Just wondering if anyone could 
tell me:

Would multicast work when using Neutron with OVN?

If so, would it work with provider networks, flat networking, or both?

Any insights into the state of multicast supoprt is greatly appreciated!

Thanks,
Tom
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Tenant multicast traffic on OpenStack with OVN

2017-09-08 Thread Numan Siddique
On Fri, Sep 8, 2017 at 4:36 PM, O'Reilly, Darragh 
wrote:

> Hi Tom,
>
>
>
> I can confirm that multicast works very well with Neutron ML2/OVS provider
> networks. See https://gist.github.com/djoreilly/
> a22ca4f38396e8867215fca0ad67fa28
>
>
>
> I don’t know about OVN and multicast.
>
>
>
> Regards,
>
> Darragh.
>
>
>
> *From:* ovs-discuss-boun...@openvswitch.org [mailto:ovs-discuss-bounces@
> openvswitch.org] *On Behalf Of *Tom Verdaat
> *Sent:* 08 September 2017 10:26
> *To:* ovs-discuss@openvswitch.org
> *Subject:* [ovs-discuss] Tenant multicast traffic on OpenStack with OVN
>
>
>
> Hi all,
>
> I've been looking into how we can enable multicast traffic for our
> OpenStack tenants. We once had this on an old OpenStack release that was
> using nova-networking and a flat network, but lost this when moving to a
> Neutron provider networking setup. Apparently - according to Cisco - it
> doesn't work with a regular Neutron ML2 + OVS setup due to what the neutron
> L3 agent does with namespaces and iptables.
>
> In reply to an audience questions about it during the Boston OpenStack
> Summit talk on OVN somebody mentioned that OVS can handle multicast traffic
> just fine and that this means using OVN it could, potentially, also work
> for tenants. So with the new OpenStack Pike release, OVN active/passive HA
> support making it suitable for production and OVN doing away with all the
> Neutron agents (including the L3 agent?) things are looking up. Just
> wondering if anyone could tell me:
>
> Would multicast work when using Neutron with OVN?
>
>
>
> If so, would it work with provider networks, flat networking, or both?
>
>
>
> Any insights into the state of multicast supoprt is greatly appreciated!
>
>
>
> Thanks,
>
> Tom
>


Hi Tom,

OVN would treat multicast traffic as broadcast traffic, although I have not
tested this. Presently OVN doesn't support IGMP snooping.
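
(If you want to check empirically, a rough sketch using classic iperf (v2)
multicast options between two VMs on the same logical switch; the group address
and timings are arbitrary:)

# receiver VM: bind to the group (this joins it) and listen for UDP
iperf -s -u -B 239.1.1.10 -i 1
# sender VM: send UDP to the multicast group with a TTL large enough to pass
iperf -c 239.1.1.10 -u -T 8 -t 10 -b 1M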

Thanks
Numan


> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] [ovs-ovn 2.7] How to find the compute node hosting l3 gateway router

2017-09-08 Thread Vikrant Aggarwal
Hi Team,

I have done the installation of packstack pike using ovn as mechanism
driver on centos. I have one controller and two compute nodes.

- Created one tenant geneve based network (added as port to router) and a
flat external network (set as gateway for router).

~~~
[root@controller ~(keystone_admin)]# rpm -qa | awk '/openvswitch-ovn/
{print $1}'
openvswitch-ovn-common-2.7.2-3.1fc27.el7.x86_64
openvswitch-ovn-host-2.7.2-3.1fc27.el7.x86_64
openvswitch-ovn-central-2.7.2-3.1fc27.el7.x86_64
~~~

I am trying to find the compute node on which my gateway router is hosted, and
also the command to check the health of distributed logical routers.

It seems that the "lrp-get-gateway-chassis" command is not present in the
version I am using.

~~~
[root@controller ~]# ovn-nbctl lrp-get-gateway-chassis
ovn-nbctl: unknown command 'lrp-get-gateway-chassis'; use --help for help

[root@controller ~]# ovn-nbctl --help | grep -i gateway

~~~

Output of ovn-nbctl show.

~~~

[root@controller ~(keystone_admin)]# ovn-nbctl show
switch 0d413d9c-7f23-4ace-9a8a-29817b3b33b5 (neutron-89113f8b-bc01-46b1-
84fb-edd5d606879c)
port 6fe3cab5-5f84-44c8-90f2-64c21b489c62
addresses: ["fa:16:3e:fa:d6:d3 10.10.10.9"]
port 397c019e-9bc3-49d3-ac4c-4aeeb1b3ba3e
addresses: ["router"]
port 4c72cee2-35b7-4bcd-8c77-135a22d16df1
addresses: ["fa:16:3e:55:3f:be 10.10.10.4"]
switch 1ec08997-0899-40d1-9b74-0a25ef476c00 (neutron-e411bbe8-e169-4268-
b2bf-d5959d9d7260)
port provnet-e411bbe8-e169-4268-b2bf-d5959d9d7260
addresses: ["unknown"]
port b95e9ae7-5c91-4037-8d2c-660d4af00974
addresses: ["router"]
router 7418a4e7-abff-4af7-85f5-6eea2ede9bea (neutron-67dc2e78-e109-4dac-
acce-b71b2c944dc1)
port lrp-b95e9ae7-5c91-4037-8d2c-660d4af00974
mac: "fa:16:3e:52:20:7c"
networks: ["192.168.122.50/24"]
port lrp-397c019e-9bc3-49d3-ac4c-4aeeb1b3ba3e
mac: "fa:16:3e:87:28:40"
networks: ["10.10.10.1/24"]
~~~
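
I assume the chassis hosting the router/gateway port should be visible somewhere
in the southbound DB, e.g. with something like the commands below, but I am not
sure what exactly to look for:

~~~
ovn-sbctl show
ovn-sbctl --columns=logical_port,chassis list Port_Binding
~~~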


Thanks & Regards,
Vikrant Aggarwal
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Tenant multicast traffic on OpenStack with OVN

2017-09-08 Thread Tom Verdaat
Hi Numan,

Good to know. So OVN would not fix this automatically; at least, given the
info in Darragh's gist, not when used in combination with GRE/VxLAN/Geneve.

Is this on the roadmap? Is somebody working on it? Is there a ticket that we
can track?

Thanks,

Tom


2017-09-08 14:37 GMT+02:00 Numan Siddique :

>
>
> On Fri, Sep 8, 2017 at 4:36 PM, O'Reilly, Darragh  > wrote:
>
>> Hi Tom,
>>
>>
>>
>> I can confirm that multicast works very well with Neutron ML2/OVS
>> provider networks. See https://gist.github.com/djorei
>> lly/a22ca4f38396e8867215fca0ad67fa28
>>
>>
>>
>> I don’t know about OVN and multicast.
>>
>>
>>
>> Regards,
>>
>> Darragh.
>>
>>
>>
>> *From:* ovs-discuss-boun...@openvswitch.org [mailto:
>> ovs-discuss-boun...@openvswitch.org] *On Behalf Of *Tom Verdaat
>> *Sent:* 08 September 2017 10:26
>> *To:* ovs-discuss@openvswitch.org
>> *Subject:* [ovs-discuss] Tenant multicast traffic on OpenStack with OVN
>>
>>
>>
>> Hi all,
>>
>> I've been looking into how we can enable multicast traffic for our
>> OpenStack tenants. We once had this on an old OpenStack release that was
>> using nova-networking and a flat network, but lost this when moving to a
>> Neutron provider networking setup. Apparently - according to Cisco - it
>> doesn't work with a regular Neutron ML2 + OVS setup due to what the neutron
>> L3 agent does with namespaces and iptables.
>>
>> In reply to an audience questions about it during the Boston OpenStack
>> Summit talk on OVN somebody mentioned that OVS can handle multicast traffic
>> just fine and that this means using OVN it could, potentially, also work
>> for tenants. So with the new OpenStack Pike release, OVN active/passive HA
>> support making it suitable for production and OVN doing away with all the
>> Neutron agents (including the L3 agent?) things are looking up. Just
>> wondering if anyone could tell me:
>>
>> Would multicast work when using Neutron with OVN?
>>
>>
>>
>> If so, would it work with provider networks, flat networking, or both?
>>
>>
>>
>> Any insights into the state of multicast supoprt is greatly appreciated!
>>
>>
>>
>> Thanks,
>>
>> Tom
>>
>
>
> Hi Tom,
>
> OVN would treat multicast traffic as broadcast traffic. Although I have
> not tested this.
> Presently OVN doesn't support IGMP snooping.
>
> Thanks
> Numan
>
>
>> ___
>> discuss mailing list
>> disc...@openvswitch.org
>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>>
>>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port

2017-09-08 Thread O Mahony, Billy
Hi Wang,

https://mail.openvswitch.org/pipermail/ovs-dev/2017-August/337309.html

I see it's been acked and is due to be pushed to master with other changes on 
the dpdk merge branch so you'll have to apply it manually for now.
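
(Roughly, assuming you save the patch from the list/patchwork as an mbox file --
the file name below is just a placeholder:)

cd ovs
git am 0001-fix-pmd-cycles-statistics.mbox
# if it does not apply cleanly to your 2.7.0 tree, run `git am --abort` and
# apply the hunks by hand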

/Billy. 

> -Original Message-
> From: 王志克 [mailto:wangzh...@jd.com]
> Sent: Friday, September 8, 2017 11:48 AM
> To: ovs-...@openvswitch.org; Jan Scheurich
> ; O Mahony, Billy
> ; Darrell Ball ; ovs-
> disc...@openvswitch.org; Kevin Traynor 
> Subject: Re: [ovs-dev] OVS DPDK NUMA pmd assignment question for
> physical port
> 
> Hi Billy,
> 
> I used ovs2.7.0. I searched the git log, and not sure which commit it is. Do 
> you
> happen to know?
> 
> Yes, I cleared the stats after traffic run.
> 
> Br,
> Wang Zhike
> 
> 
> From: "O Mahony, Billy" 
> To: "wangzh...@jd.com" , Jan Scheurich
>   , Darrell Ball ,
>   "ovs-discuss@openvswitch.org" ,
>   "ovs-...@openvswitch.org" , Kevin
> Traynor
>   
> Subject: Re: [ovs-dev] OVS DPDK NUMA pmd assignment question for
>   physical port
> Message-ID:
>   <03135aea779d444e90975c2703f148dc58c19...@irsmsx107.ger.c
> orp.intel.com>
> 
> Content-Type: text/plain; charset="utf-8"
> 
> Hi Wang,
> 
> Thanks for the figures. Unexpected results as you say. Two things come to
> mind:
> 
> I'm not sure what code you are using but the cycles per packet statistic was
> broken for a while recently. Ilya posted a patch to fix it so make sure you
> have that patch included.
> 
> Also remember to reset the pmd stats after you start your traffic and then
> measure after a short duration.
> 
> Regards,
> Billy.
> 
> 
> 
> From: 王志克 [mailto:wangzh...@jd.com]
> Sent: Friday, September 8, 2017 8:01 AM
> To: Jan Scheurich ; O Mahony, Billy
> ; Darrell Ball ; ovs-
> disc...@openvswitch.org; ovs-...@openvswitch.org; Kevin Traynor
> 
> Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for
> physical port
> 
> 
> Hi All,
> 
> 
> 
> I tested below cases, and get some performance data. The data shows there
> is little impact for cross NUMA communication, which is different from my
> expectation. (Previously I mentioned that cross NUMA would add 60%
> cycles, but I can NOT reproduce it any more).
> 
> 
> 
> @Jan,
> 
> You mentioned cross NUMA communication would cost lots more cycles. Can
> you share your data? I am not sure whether I made some mistake or not.
> 
> 
> 
> @All,
> 
> Welcome your data if you have data for similar cases. Thanks.
> 
> 
> 
> Case1: VM0->PMD0->NIC0
> 
> Case2:VM1->PMD1->NIC0
> 
> Case3:VM1->PMD0->NIC0
> 
> Case4:NIC0->PMD0->VM0
> 
> Case5:NIC0->PMD1->VM1
> 
> Case6:NIC0->PMD0->VM1
> 
> 
> 
>              VM Tx Mpps   Host Tx Mpps   avg cycles per packet   avg processing cycles per packet
> 
> Case1        1.4          1.4            512                     415
> 
> Case2        1.3          1.3            537                     436
> 
> Case3        1.35         1.35           514                     390
> 
> 
> 
>              VM Rx Mpps   Host Rx Mpps   avg cycles per packet   avg processing cycles per packet
> 
> Case4        1.3          1.3            549                     533
> 
> Case5        1.3          1.3            559                     540
> 
> Case6        1.28         1.28           568                     551
> 
> 
> 
> Br,
> 
> Wang Zhike
> 
> 
> 
> -Original Message-
> From: Jan Scheurich [mailto:jan.scheur...@ericsson.com]
> Sent: Wednesday, September 06, 2017 9:33 PM
> To: O Mahony, Billy; 王志克; Darrell Ball; ovs-
> disc...@openvswitch.org; ovs-
> d...@openvswitch.org; Kevin Traynor
> Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for
> physical port
> 
> 
> 
> Hi Billy,
> 
> 
> 
> > You are going to have to take the hit crossing the NUMA boundary at some
> point if your NIC and VM are on different NUMAs.
> 
> >
> 
> > So are you saying that it is more expensive to cross the NUMA boundary
> from the pmd to the VM that to cross it from the NIC to the
> 
> > PMD?
> 
> 
> 
> Indeed, that is the case: If the NIC crosses the QPI bus when storing packets
> in the remote NUMA there is no cost involved for the PMD. (The QPI
> bandwidth is typically not a bottleneck.) The PMD only performs local
> memory access.
> 
> 
> 
> On the other hand, if the PMD crosses the QPI when copying packets into a
> remote VM, there is a huge latency penalty involved, consuming lots of PMD
> cycles that cannot be spent on processing packets. We at Ericsson have
> observed exactly this behavior.
> 
> 
> 
> This latency penalty becomes even worse when the LLC cache hit rate is
> degraded due to LLC cache contention with real VNFs and/or unfavorable
> packet buffer re-use patterns as exhibited by real VNFs compared to typical
> synthetic benchmark apps like DPDK testpmd.
> 
> 
> 
> >
> 
> > If so then in that case you'd like to have two (for example) PMD

[ovs-discuss] How to build openvswitch with DPDK enabled debian package (openvswitch-switch-dpdk.2.7.2-1.deb) for OVS ver. 2.7.2

2017-09-08 Thread Chou, David J
Hi,

I tried to build the openvswitch with DPDK enabled debian package 
(openvswitch-switch-dpdk.2.7.2-1.deb) for OVS ver. 2.7.2.

I downloaded openvswitch-2.7.2.tar.gz from http://openvswitch.org/download, and 
dpdk-stable-16.11.2.tar.gz from 
http://dpdk.org/browse/dpdk-stable/tag/?h=v16.11.2. I could build and install 
DPDK and openvswitch with DPDK enabled by the "make install" method on Ubuntu 
16.04 LTS, and verified that they work together.

Also, by getting dpdk_16.11.2.orig.tar.gz and dpdk_16.11.2-4.debian.tar from 
https://packages.debian.org/source/sid/dpdk, I could build the dpdk-16.11.2-4 
debian package, and I verified that this dpdk_16.11.2-4 debian package works.

Also, I could build the openvswitch debian packages without DPDK enabled by 
following the instructions in debian.rst in openvswitch-2.7.2. But when I 
tried to build openvswitch with DPDK enabled by doing the following:

1. Install the dpdk_16.11.2-4 debian package I built on my build system.

2. export DATAPATH_CONFIGURE_OPTS="--with-dpdk=/usr"

and then built the openvswitch debian packages again, I saw some libdpdk parts 
being built and the build completed, but openvswitch-switch-dpdk.2.7.2-1.deb 
wasn't built, while the other openvswitch debian packages were. What am I 
missing? It seems to me that the debian spec (the debian subdirectory in 
openvswitch-2.7.2) doesn't have all the info necessary to build 
openvswitch-switch-dpdk.2.7.2-1.deb. Am I right? How can I fix this?
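
(In case it matters, I assume dpkg-buildpackage only produces the binary 
packages declared in debian/control, so I also plan to check whether the 
packaging in the tarball declares that package at all:)

cd openvswitch-2.7.2
grep -n '^Package:' debian/control        # binary packages the debian spec can build
grep -rn -i 'dpdk' debian/control debian/rules

If openvswitch-switch-dpdk is not listed there, I guess no configure option will 
make it appear and I would need packaging that actually declares it.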

Thanks a lot.

Best regards,
David Chou


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss