Re: [ovs-discuss] ovn-controller is taking 100% CPU all the time in one deployment

2019-08-29 Thread Numan Siddique
On Fri, Aug 30, 2019 at 1:04 AM Han Zhou  wrote:

>
>
> On Thu, Aug 29, 2019 at 12:16 PM Numan Siddique 
> wrote:
>
>>
>>
>> On Fri, Aug 30, 2019 at 12:37 AM Han Zhou  wrote:
>>
>>>
>>>
>>> On Thu, Aug 29, 2019 at 11:40 AM Numan Siddique 
>>> wrote:
>>> >
>>> > Hello Everyone,
>>> >
>>> > In one of the OVN deployments, we are seeing 100% CPU usage by
>>> ovn-controllers all the time.
>>> >
>>> > After investigations we found the below
>>> >
>>> >  - ovn-controller is taking more than 20 seconds to complete full loop
>>> (mainly in lflow_run() function)
>>> >
>>> >  - The physical switch is sending GARPs periodically every 10 seconds.
>>> >
>>> >  - There is ovn-bridge-mappings configured and these GARP packets
>>> reaches br-int via the patch port.
>>> >
>>> >  - We have a flow in router pipeline which applies the action -
>>> put_arp
>>> > if it is arp packet.
>>> >
>>> >  - ovn-controller pinctrl thread receives these garps, stores the
>>> learnt mac-ips in the 'put_mac_bindings' hmap and notifies the
>>> ovn-controller main thread by incrementing the seq no.
>>> >
>>> >  - In the ovn-controller main thread, after lflow_run() finishes,
>>> pinctrl_wait() is called. This function calls - poll_immediate_wake() as
>>> 'put_mac_bindings' hmap is not empty.
>>> >
>>> > - This causes the ovn-controller poll_block() to not sleep at all and
>>> this repeats all the time resulting in 100% cpu usage.
>>> >
>>> > The deployment has OVS/OVN 2.9.  We have back ported the
>>> pinctrl_thread patch.
>>> >
>>> > Some time back I had reported an issue about lflow_run() taking lot of
>>> time -
>>> https://mail.openvswitch.org/pipermail/ovs-dev/2019-July/360414.html
>>> >
>>> > I think we need to improve the logical processing sooner or later.
>>> >
>>> > But to fix this issue urgently, we are thinking of the below approach.
>>> >
>>> >  - pinctrl_thread will locally cache the mac_binding entries (just
>>> like it caches the dns entries). (Please note pinctrl_thread can not access
>>> the SB DB IDL).
>>> >
>>> > - Upon receiving any arp packet (via the put_arp action),
>>> pinctrl_thread will check the local mac_binding cache and will only wake up
>>> the main ovn-controller thread only if the mac_binding update is required.
>>> >
>>> > This approach will solve the issue since the MAC sent by the physical
>>> switches will not change. So there is no need to wake up ovn-controller
>>> main thread.
>>> >
>>> > In the present master/2.12 these GARPs will not cause this 100% cpu
>>> loop issue because incremental processing will not recompute flows.
>>> >
>>> > Even though the above approach is not really required for master/2.12,
>>> I think it is still Ok to have this as there is no harm.
>>> >
>>> > I would like to know your comments and any concerns if any.
>>> >
>>> > Thanks
>>> > Numan
>>> >
>>>
>>> Hi Numan,
>>>
>>> I think this approach should work. Just to make sure, to update the
>>> cache efficiently (to avoid another kind of recompute), it should use ovsdb
>>> change-tracking to update it incrementally.
>>>
>>> Regarding master/2.12, it is not harmful except that it will add some
>>> more code and increase memory footprint. For our current use cases, there
>>> can be easily 10,000s mac_bindings, but it may still be ok because each
>>> entry is very small. However, is there any benefit for doing this in
>>> master/2.12?
>>>
>>
>> I don't see much benefit. But I can't submit a patch to branch 2.9
>> without the fix getting merged in master first right ?
>> May be once it is merged in branch 2.9, we can consider to delete it ?
>>
> I think it is just about how you would maintain a downstream branch.
> Since it is downstream, I don't think you need a change to be in upstream
> before fixing a problem. In this case it may be *no harm*, but what if the
> upstream is completely changed and incompatible for such a fix any more? It
> shouldn't prevent you from fixing your downstream. (Of course it is better
> to not have downstream at all, but sometimes it is useful to have it for a
> temporary period, and since you (and us, too) are already there ... :)
>

The downstream 2.9 we have is OVS 2.9.0 plus a bunch of patches (to fix
issues) which are already merged upstream (preferably an upstream branch or at
least upstream master).  Any downstream-only patch is frowned upon. When we
upgrade to 2.10 or higher versions there is a risk of functional changes
if the patch is not upstream.

If we apply the approach I described above to downstream 2.9, then
there is definitely some functional change. When such GARPs are received,
our downstream 2.9 will not wake up the ovn-controller main thread,
but with 2.12/master, we wake up the ovn-controller main thread.

I still see no harm in having this in upstream master. Maybe instead of
having a complete clone of the mac_bindings, we can cache only the subset
of mac_bindings that are learnt by an ovn-controller.

I will explore more.

Thanks
Numan

Re: [ovs-discuss] Configure update interval for STP counter under status field within Port table

2019-08-29 Thread Ben Pfaff
On Thu, Aug 29, 2019 at 06:02:10PM +0200, Dejan Pojbič wrote:
> is there an option where we could define the interval time for updating the STP
> counter under the Port table's status field?
> 
> Something like you have it for Port/Interface/Mirror statistics:
> 
>other_config : stats-update-interval: optional string, containing an
>integer, at least 5,000
>   Interval for updating statistics to the database, in milliseconds.
>   This option will affect the update of the statistics column in the
>   following tables: Port, Interface, Mirror.
> 
>   Default value is 5000 ms.
> 
>   Getting statistics more frequently can be achieved via OpenFlow.
> 
> In case there isn't, what would be the best approach to introduce that?

If that setting doesn't already affect the STP counter update
frequency, then it might be easiest to just make it have that effect
too.
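The suggestion above amounts to reusing the existing rate gate. The sketch below is illustrative only, not the actual OVS implementation: the clamp mirrors the documented semantics of other_config:stats-update-interval (an integer, at least 5,000 ms, defaulting to 5000 ms), and stats_update_due() is a hypothetical helper showing how the same gate could also cover the STP counters.

```c
#include <assert.h>
#include <stdbool.h>

#define STATS_UPDATE_INTERVAL_MIN_MS     5000
#define STATS_UPDATE_INTERVAL_DEFAULT_MS 5000

/* Clamp a configured interval the way other_config:stats-update-interval
 * is documented: an integer, at least 5,000 ms; default 5000 ms. */
static int clamp_stats_interval(int configured_ms)
{
    if (configured_ms <= 0) {
        return STATS_UPDATE_INTERVAL_DEFAULT_MS;
    }
    return configured_ms < STATS_UPDATE_INTERVAL_MIN_MS
           ? STATS_UPDATE_INTERVAL_MIN_MS
           : configured_ms;
}

/* True when enough time has passed to push statistics (including, per the
 * suggestion above, the STP counters) to the database again. */
static bool stats_update_due(long long now_ms, long long last_update_ms,
                             int interval_ms)
{
    return now_ms - last_update_ms >= interval_ms;
}
```

The point of routing STP counters through the same gate is that one configuration knob keeps controlling all database statistics traffic.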
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovn-controller is taking 100% CPU all the time in one deployment

2019-08-29 Thread Mark Michelson

On 8/29/19 2:39 PM, Numan Siddique wrote:

Hello Everyone,

In one of the OVN deployments, we are seeing 100% CPU usage by 
ovn-controllers all the time.


After investigations we found the below

  - ovn-controller is taking more than 20 seconds to complete full loop 
(mainly in lflow_run() function)


  - The physical switch is sending GARPs periodically every 10 seconds.

  - There is ovn-bridge-mappings configured and these GARP packets 
reaches br-int via the patch port.


  - We have a flow in router pipeline which applies the action - put_arp
if it is arp packet.

  - ovn-controller pinctrl thread receives these garps, stores the 
learnt mac-ips in the 'put_mac_bindings' hmap and notifies the 
ovn-controller main thread by incrementing the seq no.


  - In the ovn-controller main thread, after lflow_run() finishes, 
pinctrl_wait() is called. This function calls - poll_immediate_wake() as 
'put_mac_bindings' hmap is not empty.


- This causes the ovn-controller poll_block() to not sleep at all and 
this repeats all the time resulting in 100% cpu usage.


The deployment has OVS/OVN 2.9.  We have back ported the pinctrl_thread 
patch.


Some time back I had reported an issue about lflow_run() taking lot of 
time - https://mail.openvswitch.org/pipermail/ovs-dev/2019-July/360414.html


I think we need to improve the logical processing sooner or later.


I agree that this is very important. I know that logical flow processing 
is the biggest bottleneck for ovn-controller, but 20 seconds is just 
ridiculous. In your scale testing, you found that lflow_run() was taking 
10 seconds to complete.


I'm curious if there are any factors in this particular deployment's 
configuration that might contribute to this. For instance, does this 
deployment have a glut of ACLs? Are they not using port groups?


This particular deployment's configuration may give us a good scenario 
for our testing to improve lflow processing time.




But to fix this issue urgently, we are thinking of the below approach.

  - pinctrl_thread will locally cache the mac_binding entries (just like 
it caches the dns entries). (Please note pinctrl_thread can not access 
the SB DB IDL).




- Upon receiving any arp packet (via the put_arp action), pinctrl_thread 
will check the local mac_binding cache and will only wake up the main 
ovn-controller thread only if the mac_binding update is required.


This approach will solve the issue since the MAC sent by the physical 
switches will not change. So there is no need to wake up ovn-controller 
main thread.


I think this can work well. We have a lot of what's needed already in 
pinctrl at this point. We have the hash table of mac bindings already. 
Currently, we flush this table after we write the data to the southbound 
database. Instead, we would keep the bindings in memory. We would need 
to ensure that the in-memory MAC bindings eventually get deleted if they 
become stale.
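The stale-entry concern can be handled with a per-entry timestamp plus a periodic sweep. Everything below is an illustrative sketch, not OVN code: the struct, the 5-minute idle timeout, and the function names are all assumptions.

```c
#include <assert.h>
#include <stdbool.h>

#define MAC_BINDING_IDLE_MS (5 * 60 * 1000)  /* assumed idle timeout */

struct cached_binding_sketch {
    long long last_seen_ms;   /* refreshed each time the GARP re-arrives */
    bool in_use;
};

/* An entry is stale once it has not been refreshed for the idle timeout. */
static bool binding_is_stale(const struct cached_binding_sketch *b,
                             long long now_ms)
{
    return b->in_use && now_ms - b->last_seen_ms > MAC_BINDING_IDLE_MS;
}

/* Periodic sweep: evict stale entries so the in-memory cache cannot
 * grow without bound. */
static void sweep_bindings(struct cached_binding_sketch *tbl, int n,
                           long long now_ms)
{
    for (int i = 0; i < n; i++) {
        if (binding_is_stale(&tbl[i], now_ms)) {
            tbl[i].in_use = false;
        }
    }
}
```

Because the physical switch re-sends GARPs every 10 seconds, a legitimately live binding keeps refreshing its timestamp and never expires; only bindings whose sender disappears get evicted.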




In the present master/2.12 these GARPs will not cause this 100% cpu loop 
issue because incremental processing will not recompute flows.


Another mitigating factor for master is something I'm currently working 
on. I've got the beginnings of a patch series going where I am 
separating pinctrl into a separate process from ovn-controller: 
https://github.com/putnopvut/ovn/tree/pinctrl_process


It's in the early stages right now, so please don't judge :)

Separating pinctrl to its own process means that it cannot directly 
cause ovn-controller to wake up like it currently might.




Even though the above approach is not really required for master/2.12, I 
think it is still Ok to have this as there is no harm.


I would like to know your comments and any concerns if any.


Hm, I don't really understand why we'd want to put this in master/2.12 
if the problem doesn't exist there. The main concern I have is with 
regards to cache lifetime. I don't want to introduce potential memory 
growth concerns into a branch if it's not necessary.


Is there a way for us to get this included in 2.9-2.11 without having to 
put it in master or 2.12? It's hard to classify this as a bug fix, 
really, but it does prevent unwanted behavior in real-world setups. 
Could we get an opinion from committers on this?




Thanks
Numan




Re: [ovs-discuss] ovn-controller is taking 100% CPU all the time in one deployment

2019-08-29 Thread Han Zhou
On Thu, Aug 29, 2019 at 12:16 PM Numan Siddique  wrote:

>
>
> On Fri, Aug 30, 2019 at 12:37 AM Han Zhou  wrote:
>
>>
>>
>> On Thu, Aug 29, 2019 at 11:40 AM Numan Siddique 
>> wrote:
>> >
>> > Hello Everyone,
>> >
>> > In one of the OVN deployments, we are seeing 100% CPU usage by
>> ovn-controllers all the time.
>> >
>> > After investigations we found the below
>> >
>> >  - ovn-controller is taking more than 20 seconds to complete full loop
>> (mainly in lflow_run() function)
>> >
>> >  - The physical switch is sending GARPs periodically every 10 seconds.
>> >
>> >  - There is ovn-bridge-mappings configured and these GARP packets
>> reaches br-int via the patch port.
>> >
>> >  - We have a flow in router pipeline which applies the action - put_arp
>> > if it is arp packet.
>> >
>> >  - ovn-controller pinctrl thread receives these garps, stores the
>> learnt mac-ips in the 'put_mac_bindings' hmap and notifies the
>> ovn-controller main thread by incrementing the seq no.
>> >
>> >  - In the ovn-controller main thread, after lflow_run() finishes,
>> pinctrl_wait() is called. This function calls - poll_immediate_wake() as
>> 'put_mac_bindings' hmap is not empty.
>> >
>> > - This causes the ovn-controller poll_block() to not sleep at all and
>> this repeats all the time resulting in 100% cpu usage.
>> >
>> > The deployment has OVS/OVN 2.9.  We have back ported the pinctrl_thread
>> patch.
>> >
>> > Some time back I had reported an issue about lflow_run() taking lot of
>> time -
>> https://mail.openvswitch.org/pipermail/ovs-dev/2019-July/360414.html
>> >
>> > I think we need to improve the logical processing sooner or later.
>> >
>> > But to fix this issue urgently, we are thinking of the below approach.
>> >
>> >  - pinctrl_thread will locally cache the mac_binding entries (just like
>> it caches the dns entries). (Please note pinctrl_thread can not access the
>> SB DB IDL).
>> >
>> > - Upon receiving any arp packet (via the put_arp action),
>> pinctrl_thread will check the local mac_binding cache and will only wake up
>> the main ovn-controller thread only if the mac_binding update is required.
>> >
>> > This approach will solve the issue since the MAC sent by the physical
>> switches will not change. So there is no need to wake up ovn-controller
>> main thread.
>> >
>> > In the present master/2.12 these GARPs will not cause this 100% cpu
>> loop issue because incremental processing will not recompute flows.
>> >
>> > Even though the above approach is not really required for master/2.12,
>> I think it is still Ok to have this as there is no harm.
>> >
>> > I would like to know your comments and any concerns if any.
>> >
>> > Thanks
>> > Numan
>> >
>>
>> Hi Numan,
>>
>> I think this approach should work. Just to make sure, to update the cache
>> efficiently (to avoid another kind of recompute), it should use ovsdb
>> change-tracking to update it incrementally.
>>
>> Regarding master/2.12, it is not harmful except that it will add some
>> more code and increase memory footprint. For our current use cases, there
>> can be easily 10,000s mac_bindings, but it may still be ok because each
>> entry is very small. However, is there any benefit for doing this in
>> master/2.12?
>>
>
> I don't see much benefit. But I can't submit a patch to branch 2.9 without
> the fix getting merged in master first right ?
> May be once it is merged in branch 2.9, we can consider to delete it ?
>
I think it is just about how you would maintain a downstream branch. Since
it is downstream, I don't think you need a change to be in upstream before
fixing a problem. In this case it may be *no harm*, but what if the
upstream is completely changed and incompatible for such a fix any more? It
shouldn't prevent you from fixing your downstream. (Of course it is better
to not have downstream at all, but sometimes it is useful to have it for a
temporary period, and since you (and us, too) are already there ... :)


Re: [ovs-discuss] ovn-controller is taking 100% CPU all the time in one deployment

2019-08-29 Thread Numan Siddique
On Fri, Aug 30, 2019 at 12:37 AM Han Zhou  wrote:

>
>
> On Thu, Aug 29, 2019 at 11:40 AM Numan Siddique 
> wrote:
> >
> > Hello Everyone,
> >
> > In one of the OVN deployments, we are seeing 100% CPU usage by
> ovn-controllers all the time.
> >
> > After investigations we found the below
> >
> >  - ovn-controller is taking more than 20 seconds to complete full loop
> (mainly in lflow_run() function)
> >
> >  - The physical switch is sending GARPs periodically every 10 seconds.
> >
> >  - There is ovn-bridge-mappings configured and these GARP packets
> reaches br-int via the patch port.
> >
> >  - We have a flow in router pipeline which applies the action - put_arp
> > if it is arp packet.
> >
> >  - ovn-controller pinctrl thread receives these garps, stores the learnt
> mac-ips in the 'put_mac_bindings' hmap and notifies the ovn-controller main
> thread by incrementing the seq no.
> >
> >  - In the ovn-controller main thread, after lflow_run() finishes,
> pinctrl_wait() is called. This function calls - poll_immediate_wake() as
> 'put_mac_bindings' hmap is not empty.
> >
> > - This causes the ovn-controller poll_block() to not sleep at all and
> this repeats all the time resulting in 100% cpu usage.
> >
> > The deployment has OVS/OVN 2.9.  We have back ported the pinctrl_thread
> patch.
> >
> > Some time back I had reported an issue about lflow_run() taking lot of
> time -
> https://mail.openvswitch.org/pipermail/ovs-dev/2019-July/360414.html
> >
> > I think we need to improve the logical processing sooner or later.
> >
> > But to fix this issue urgently, we are thinking of the below approach.
> >
> >  - pinctrl_thread will locally cache the mac_binding entries (just like
> it caches the dns entries). (Please note pinctrl_thread can not access the
> SB DB IDL).
> >
> > - Upon receiving any arp packet (via the put_arp action), pinctrl_thread
> will check the local mac_binding cache and will only wake up the main
> ovn-controller thread only if the mac_binding update is required.
> >
> > This approach will solve the issue since the MAC sent by the physical
> switches will not change. So there is no need to wake up ovn-controller
> main thread.
> >
> > In the present master/2.12 these GARPs will not cause this 100% cpu loop
> issue because incremental processing will not recompute flows.
> >
> > Even though the above approach is not really required for master/2.12, I
> think it is still Ok to have this as there is no harm.
> >
> > I would like to know your comments and any concerns if any.
> >
> > Thanks
> > Numan
> >
>
> Hi Numan,
>
> I think this approach should work. Just to make sure, to update the cache
> efficiently (to avoid another kind of recompute), it should use ovsdb
> change-tracking to update it incrementally.
>
> Regarding master/2.12, it is not harmful except that it will add some more
> code and increase memory footprint. For our current use cases, there can be
> easily 10,000s mac_bindings, but it may still be ok because each entry is
> very small. However, is there any benefit for doing this in master/2.12?
>

I don't see much benefit. But I can't submit a patch to branch 2.9 without
the fix getting merged in master first, right?
Maybe once it is merged in branch 2.9, we can consider deleting it?

Thanks
Numan


>
> Thanks,
> Han
>
>


[ovs-discuss] Configure update interval for STP counter under status field within Port table

2019-08-29 Thread Dejan Pojbič
Hi,

is there an option where we could define the interval time for updating the STP
counter under the Port table's status field?

Something like what you have for Port/Interface/Mirror statistics:

   other_config : stats-update-interval: optional string, containing an
   integer, at least 5,000
  Interval for updating statistics to the database, in milliseconds.
  This option will affect the update of the statistics column in the
  following tables: Port, Interface, Mirror.

  Default value is 5000 ms.

  Getting statistics more frequently can be achieved via OpenFlow.

In case there isn't, what would be the best approach to introduce that?

Thanks in advance,
Dejan Pojbic


Re: [ovs-discuss] ovn-controller is taking 100% CPU all the time in one deployment

2019-08-29 Thread Han Zhou
On Thu, Aug 29, 2019 at 11:40 AM Numan Siddique  wrote:
>
> Hello Everyone,
>
> In one of the OVN deployments, we are seeing 100% CPU usage by
ovn-controllers all the time.
>
> After investigations we found the below
>
>  - ovn-controller is taking more than 20 seconds to complete full loop
(mainly in lflow_run() function)
>
>  - The physical switch is sending GARPs periodically every 10 seconds.
>
>  - There is ovn-bridge-mappings configured and these GARP packets reaches
br-int via the patch port.
>
>  - We have a flow in router pipeline which applies the action - put_arp
> if it is arp packet.
>
>  - ovn-controller pinctrl thread receives these garps, stores the learnt
mac-ips in the 'put_mac_bindings' hmap and notifies the ovn-controller main
thread by incrementing the seq no.
>
>  - In the ovn-controller main thread, after lflow_run() finishes,
pinctrl_wait() is called. This function calls - poll_immediate_wake() as
'put_mac_bindings' hmap is not empty.
>
> - This causes the ovn-controller poll_block() to not sleep at all and
this repeats all the time resulting in 100% cpu usage.
>
> The deployment has OVS/OVN 2.9.  We have back ported the pinctrl_thread
patch.
>
> Some time back I had reported an issue about lflow_run() taking lot of
time - https://mail.openvswitch.org/pipermail/ovs-dev/2019-July/360414.html
>
> I think we need to improve the logical processing sooner or later.
>
> But to fix this issue urgently, we are thinking of the below approach.
>
>  - pinctrl_thread will locally cache the mac_binding entries (just like
it caches the dns entries). (Please note pinctrl_thread can not access the
SB DB IDL).
>
> - Upon receiving any arp packet (via the put_arp action), pinctrl_thread
will check the local mac_binding cache and will only wake up the main
ovn-controller thread only if the mac_binding update is required.
>
> This approach will solve the issue since the MAC sent by the physical
switches will not change. So there is no need to wake up ovn-controller
main thread.
>
> In the present master/2.12 these GARPs will not cause this 100% cpu loop
issue because incremental processing will not recompute flows.
>
> Even though the above approach is not really required for master/2.12, I
think it is still Ok to have this as there is no harm.
>
> I would like to know your comments and any concerns if any.
>
> Thanks
> Numan
>

Hi Numan,

I think this approach should work. Just to make sure, to update the cache
efficiently (to avoid another kind of recompute), it should use ovsdb
change-tracking to update it incrementally.
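The incremental idea can be illustrated with a toy delta list. Real code would use ovsdb-idl change tracking (tracked rows) against the SB MAC_Binding table; the enum, struct, and presence bitmap below are illustrative stand-ins, not the IDL API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum delta_op { ROW_INSERTED, ROW_DELETED };

struct binding_delta {
    enum delta_op op;
    int key;              /* stand-in for a MAC_Binding row identity */
};

/* Apply only the rows reported as changed since the last run, instead of
 * rebuilding the whole cache from the full database contents.  This is
 * the "avoid another kind of recompute" point: cost is O(changes), not
 * O(total rows). */
static void apply_tracked_deltas(bool cache[], const struct binding_delta *d,
                                 size_t n)
{
    for (size_t i = 0; i < n; i++) {
        cache[d[i].key] = (d[i].op == ROW_INSERTED);
    }
}
```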

Regarding master/2.12, it is not harmful except that it will add some more
code and increase the memory footprint. For our current use cases, there can
easily be 10,000s of mac_bindings, but it may still be OK because each entry
is very small. However, is there any benefit to doing this in master/2.12?

Thanks,
Han


[ovs-discuss] ovn-controller is taking 100% CPU all the time in one deployment

2019-08-29 Thread Numan Siddique
Hello Everyone,

In one of the OVN deployments, we are seeing 100% CPU usage by
ovn-controllers all the time.

After investigation, we found the following:

 - ovn-controller is taking more than 20 seconds to complete a full loop
(mainly in the lflow_run() function)

 - The physical switch is sending GARPs periodically every 10 seconds.

 - There is ovn-bridge-mappings configured and these GARP packets reach
br-int via the patch port.

 - We have a flow in the router pipeline which applies the action put_arp
if it is an ARP packet.

 - The ovn-controller pinctrl thread receives these GARPs, stores the learnt
mac-ips in the 'put_mac_bindings' hmap, and notifies the ovn-controller main
thread by incrementing the seq no.

 - In the ovn-controller main thread, after lflow_run() finishes,
pinctrl_wait() is called. This function calls poll_immediate_wake() as the
'put_mac_bindings' hmap is not empty.

- This causes the ovn-controller poll_block() to not sleep at all, and this
repeats all the time, resulting in 100% CPU usage.
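The interaction described above can be modeled in a few lines. These are sketches of the pattern only, not the real poll-loop API: the "_sketch" functions are stand-ins showing how a wait hook that requests an immediate wakeup whenever work is still queued prevents the loop from ever sleeping.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

static bool immediate_wake_requested;

/* Stand-in for poll_immediate_wake(): tell the poll loop not to sleep. */
static void poll_immediate_wake_sketch(void)
{
    immediate_wake_requested = true;
}

/* Models pinctrl_wait(): any queued MAC binding forces an immediate wake,
 * regardless of whether the main thread can actually flush it this
 * iteration.  With GARPs refilling the queue every 10 seconds and a
 * 20-second lflow_run(), the queue is effectively never empty. */
static void pinctrl_wait_sketch(size_t n_queued_mac_bindings)
{
    immediate_wake_requested = false;
    if (n_queued_mac_bindings > 0) {
        poll_immediate_wake_sketch();
    }
}

/* Models poll_block(): returns true only if the loop actually slept. */
static bool poll_block_sketch(void)
{
    return !immediate_wake_requested;
}
```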

The deployment has OVS/OVN 2.9.  We have backported the pinctrl_thread
patch.

Some time back I had reported an issue about lflow_run() taking a lot of time
- https://mail.openvswitch.org/pipermail/ovs-dev/2019-July/360414.html

I think we need to improve the logical processing sooner or later.

But to fix this issue urgently, we are thinking of the below approach.

 - pinctrl_thread will locally cache the mac_binding entries (just like it
caches the DNS entries). (Please note that pinctrl_thread cannot access the
SB DB IDL.)

- Upon receiving any ARP packet (via the put_arp action), pinctrl_thread
will check the local mac_binding cache and will wake up the main
ovn-controller thread only if a mac_binding update is required.

This approach will solve the issue since the MACs sent by the physical
switches will not change, so there is no need to wake up the ovn-controller
main thread.
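A minimal sketch of the proposed check, with an illustrative fixed-size linear cache standing in for the hmap the real pinctrl code would use (the struct and function names are assumptions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define CACHE_SIZE 64

struct mac_binding_sketch {
    uint32_t ip;          /* learned IPv4 address */
    uint8_t mac[6];       /* learned Ethernet address */
    bool in_use;
};

static struct mac_binding_sketch cache[CACHE_SIZE];

/* Returns true when the main thread must be woken: the binding is new or
 * the MAC for this IP changed.  A repeated GARP with an unchanged MAC
 * (the common case for a physical switch) returns false, so no wakeup. */
static bool mac_binding_needs_update(uint32_t ip, const uint8_t mac[6])
{
    struct mac_binding_sketch *free_slot = NULL;

    for (int i = 0; i < CACHE_SIZE; i++) {
        if (cache[i].in_use && cache[i].ip == ip) {
            if (!memcmp(cache[i].mac, mac, 6)) {
                return false;             /* already known: no wakeup */
            }
            memcpy(cache[i].mac, mac, 6); /* MAC moved: update and wake */
            return true;
        }
        if (!cache[i].in_use && !free_slot) {
            free_slot = &cache[i];
        }
    }
    if (free_slot) {
        free_slot->ip = ip;
        memcpy(free_slot->mac, mac, 6);
        free_slot->in_use = true;
    }
    return true;                          /* new binding: wakeup */
}
```

With this gate in place, only the first GARP from each (IP, MAC) pair wakes the main thread; the periodic repeats are absorbed entirely inside pinctrl.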

In the present master/2.12, these GARPs will not cause this 100% CPU loop
issue because incremental processing will not recompute flows.

Even though the above approach is not really required for master/2.12, I
think it is still OK to have it as there is no harm.

I would like to know your comments and concerns, if any.

Thanks
Numan


[ovs-discuss] OVS+OVN '19: Call for Participation

2019-08-29 Thread Ben Pfaff
[previously sent to ovs-announce, sorry about the duplication]

The Open vSwitch and OVN projects will host their sixth annual
conference focused on Open vSwitch and OVN on December 10 and 11,
2019, in the Red Hat offices in Westford, Massachusetts (near Boston).

We are seeking long and short ("lightning") talks on topics related to
Open vSwitch and OVN.  We expect long talks to last 20 minutes with an
additional 5 minutes for questions, and short talks to last 5 minutes.

Topics that may be considered, among others, include:

 * The future of Open vSwitch (e.g., AF_XDP, P4, eBPF).

 * NAT, DPI, and stateful processing with Open vSwitch. 

 * Deploying and using OVN.

 * Testing and scaling OVN.

 * NIC acceleration of Open vSwitch.

 * Using Open vSwitch to realize NFV and service chaining.

 * Porting Open vSwitch to new operating systems, hypervisors,
   or container systems.

 * Integrating Open vSwitch into larger systems.

 * Troubleshooting and debugging Open vSwitch installations.

 * Open vSwitch development and testing practices.

 * Performance measurements or approaches to improving
   performance.

 * End-user or service provider experiences with Open vSwitch.

 * Hardware ports of Open vSwitch (existing, in progress, or
   speculative).

 * The relationship between OpenFlow and Open vSwitch.

 * Using, developing, or administering OpenFlow controllers in
   conjunction with Open vSwitch.

 * Comparisons to other implementations of features found in
   Open vSwitch (e.g. other OpenFlow implementations, other
   software switches, etc.).

 * Increasing the size and diversity of the Open vSwitch user
   and developer base.

 * Tutorials and walkthroughs of use cases.

 * Demos.

Talks will be recorded and made available online.


How to propose a talk
-

We are soliciting proposals for full talks and short ("lightning")
talks in a single round.  You may propose a talk as a full talk, a
lightning talk, or for either one at the committee's discretion.

We will also accept proposals for panel discussions.  Please submit
them as full talks and make it clear in the description that it is a
panel.

Please submit proposals to ovs...@openvswitch.org by September 23.
Proposals should include:

* Title and abstract.

* Speaker names, email addresses, and affiliations.

* Whether you are proposing a full talk or a lightning talk or
  either one at the committee's discretion.

Speakers will be notified of acceptance by October 7.

Speakers should plan to attend the event in person.  Travel and
accommodations are the responsibility of attendees.


How to attend
-

We expect to charge $200 for registration.  We offer complimentary
registration to speakers, students, academics, and anyone for whom the
registration fee is burdensome.  Please contact us if you need any
help obtaining a complimentary registration.

General registration is not yet open.  We will announce when it opens.


More information


To reach the organizers, email ovs...@openvswitch.org.  For general
discussion of the conference, please use the ovs-discuss mailing list
at disc...@openvswitch.org.


[ovs-discuss] Another OVN integration with Kubernetes

2019-08-29 Thread 刘梦馨
Hi, all

We have implemented a Kubernetes network plugin, Kube-OVN
(https://github.com/alauda/kube-ovn), that uses OVN to manage the container
network.

This project has some similarities with ovn-kubernetes in feature set and
architecture. The main difference is that ovn-kubernetes uses a
switch-per-node network topology while Kube-OVN uses a
switch-per-namespace topology (a namespace is a virtual cluster in the
Kubernetes concept). On top of that we implemented static IP, switch-level
ACL, and traffic-mirroring functions.

We have developed and tested this project for about six months and we want to
hear more suggestions from the community. Any thoughts about Kube-OVN are
welcome.

Regards,
Mengxin


[ovs-discuss] Policer Burst size

2019-08-29 Thread V Sai Surya Laxman Rao Bellala
Hello all,

How do we determine the proper burst size for policers so that policing
happens as expected?

Waiting for a reply.

Regards
Laxman


Re: [ovs-discuss] [OVN] Does OVN support connection from one router by another

2019-08-29 Thread taoyunupt
Hi,Numan,


Thanks for your reply. This is the output of "ovn-nbctl show":


switch 508ad343-913e-4a6d-9a91-23e9de468b68 
(neutron-030095c8-0caa-4f98-9520-632317b2c837) (aka tyx-net4)
port af792055-72af-4f7a-8484-66db316228f4
type: router
router-port: lrp-af792055-72af-4f7a-8484-66db316228f4
port 31b77956-94f3-421f-a2e8-50ff48893c23
type: router
router-port: lrp-31b77956-94f3-421f-a2e8-50ff48893c23

switch aba4a7d7-0b11-487b-9193-913a3a839632 
(neutron-5fcc9c4d-de46-4c8e-a10b-3ac79c33e5a2) (aka tyx-net3)
port fa890c59-004c-4e38-85e5-a65282ed5fc5
addresses: ["fa:16:3e:f9:3f:f6 192.168.3.4"]
port 63644656-cc45-4eff-87d1-6414c77556d6
type: router
router-port: lrp-63644656-cc45-4eff-87d1-6414c77556d6

switch b3e5075c-ce72-43ef-bd54-af0e03913d2a 
(neutron-b38f764a-bbb2-4cda-a3f4-7ad347d6be4a) (aka tyx-net5)
port 39d68607-c28d-437c-9f56-adcbe4ec6a05
type: router
router-port: lrp-39d68607-c28d-437c-9f56-adcbe4ec6a05
port a47ffe63-cc98-4c07-8abb-19692ba2d806
addresses: ["fa:16:3e:6a:89:47 192.168.5.28"]


router 024210fb-ed5d-4fda-88aa-2a332a962fd4 
(neutron-1994d60c-c7dc-4431-9b89-b42bb6288eb7) (aka tyx-router-ext4)
port lrp-39d68607-c28d-437c-9f56-adcbe4ec6a05
mac: "fa:16:3e:66:b2:e6"
networks: ["192.168.5.1/24"]
port lrp-31b77956-94f3-421f-a2e8-50ff48893c23
mac: "fa:16:3e:67:fc:74"
networks: ["192.168.4.7/24"]


router 2fcf7dfe-0658-4a01-a2e9-4bebc1089ad6 
(neutron-22d95dc6-1c03-4e9f-b50f-d08a311cf79c) (aka tyx-router-ext3)
port lrp-63644656-cc45-4eff-87d1-6414c77556d6
mac: "fa:16:3e:e7:e0:d3"
networks: ["192.168.3.1/24"]
port lrp-8e11cadb-6953-430d-a481-59bdd1f19c56
mac: "fa:16:3e:9c:f3:3a"
networks: ["10.142.174.27/24"]
gateway chassis: [09662100-c00c-414b-bf25-1c18e24bff62]
port lrp-af792055-72af-4f7a-8484-66db316228f4
mac: "fa:16:3e:5d:fe:30"
networks: ["192.168.4.1/24"]
nat 6f6e60ea-6277-45d7-a704-a4501180c8bb
external ip: "10.142.174.27"
logical ip: "192.168.4.0/24"
type: "snat"
nat 9931f064-59d6-4e2d-ab4c-ade06a3f296d
external ip: "10.142.174.27"
logical ip: "192.168.3.0/24"
type: "snat"

On 2019-08-29 14:23:05, "Numan Siddique" wrote:

Hi Yun,


It is supported.


From the ovn-trace, looks like it is getting dropped because of ACL rules.


Can you share output of "ovn-nbctl show"


Thanks
Numan




On Thu, Aug 29, 2019 at 11:48 AM taoyunupt  wrote:

Hi,
I tried this feature with OVN/OVS 2.10 and OpenStack (Rocky), but it failed.
I have configured static routes for the two routers.
The topology is as follows. The static route for tyx-router-ext3 is
{"destination": "192.168.5.0/24", "nexthop": "192.168.4.7"}, and for
tyx-router-ext4 it is {"destination": "192.168.3.0/24", "nexthop": "192.168.4.1"}.

(192.168.3.4)vm1--tyx-net3--tyx-router-ext3--tyx-net4(192.168.4.1)--tyx-net4(192.168.4.7)---tyx-router-ext4---tyx-net5---vm3(192.168.5.28)



The following is the output of 'ovn-trace':


[root@ovn1 ~]# ovn-trace  tyx-net3  'inport == 
"fa890c59-004c-4e38-85e5-a65282ed5fc5" && eth.src == fa:16:3e:f9:3f:f6 && 
ip4.src == 192.168.3.4 && ip4.dst == 192.168.5.28  && eth.dst == 
fa:16:3e:e7:e0:d3 && icmp4.type == 8 && icmp4.code == 0 && ip.ttl == 64'
# icmp,reg14=0x2,vlan_tci=0x,dl_src=fa:16:3e:f9:3f:f6,dl_dst=fa:16:3e:e7:e0:d3,nw_src=192.168.3.4,nw_dst=192.168.5.28,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0


ingress(dp="tyx-net3", inport="fa890c")
---
 0. ls_in_port_sec_l2 (ovn-northd.c:4060): inport == "fa890c" && eth.src == 
{fa:16:3e:f9:3f:f6}, priority 50, uuid 9108f8d0
next;
 1. ls_in_port_sec_ip (ovn-northd.c:2815): inport == "fa890c" && eth.src == 
fa:16:3e:f9:3f:f6 && ip4.src == {192.168.3.4}, priority 90, uuid 8b1a9b58
next;
 3. ls_in_pre_acl (ovn-northd.c:3192): ip, priority 100, uuid c725e5e1
reg0[0] = 1;
next;
 5. ls_in_pre_stateful (ovn-northd.c:3319): reg0[0] == 1, priority 100, uuid 
82635bb8
ct_next;


ct_next(ct_state=est|trk /* default (use --ct to customize) */)
---
 6. ls_in_acl (ovn-northd.c:3506): !ct.new && ct.est && !ct.rpl && 
ct_label.blocked == 0 && (inport == @pg_e2c85897_5172_4f7e_8e8f_955e45fcfe4e && 
ip4), priority 2002, uuid e9514494
next;
16. ls_in_l2_lkup (ovn-northd.c:4435): eth.dst == fa:16:3e:e7:e0:d3, priority 
50, uuid 9b6212a8
outport = "636446";
output;


egress(dp="tyx-net3", inport="fa890c", outport="636446")

 1. ls_out_pre_acl (ovn-northd.c:3148): ip && outport == "636446", priority 
110, uuid 8f379e0c
next;
 9. ls_out_port_sec_l2 (ovn-northd.c:4518): outport == "636446", priority 50, 
uuid a567859a
output;
/* output to "636446", type "patch" */


ingress