[ovs-discuss] Python API documentation

2019-05-07 Thread Sergiy Lozovsky
Hi,

is there any documentation on Python binding (
https://github.com/openvswitch/ovs/tree/master/python )?

Is there any documentation on the underlying TCP/JSON protocol?
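
For reference, the wire protocol is the OVSDB management protocol specified in
RFC 7047, and the bundled ovsdb-client can be used to poke at it. A minimal
sketch, assuming ovsdb-server is (or is made) reachable on the conventional
TCP port 6640:

# Expose the local database over TCP (a unix: socket target also works).
ovs-vsctl set-manager ptcp:6640

# Exercise the JSON-RPC protocol with the bundled client.
ovsdb-client list-dbs tcp:127.0.0.1:6640
ovsdb-client dump tcp:127.0.0.1:6640 Open_vSwitch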

Thanks,

Sergiy.


Re: [ovs-discuss] DNAT missing from datapath actions in ofproto/trace

2019-05-07 Thread Ben Pfaff
I investigated this a little bit by putting the following in a file
named 'flows':

table=0 ip,in_port=4,dl_dst=a6:c1:a7:15:a4:3d,nw_dst=10.39.176.4,priority=3100, 
actions=resubmit(,25)
table=25, ip,vlan_tci=0x1000/0x1000,nw_dst=10.39.176.4, priority=3100, 
actions=set_field:00:00:5e:00:01:01->eth_src, 
set_field:a6:c1:a7:15:a4:3d->eth_dst, pop_vlan, resubmit(,28)
table=28, priority=100, actions=resubmit(,35)
table=35, ip,nw_dst=10.39.176.4, priority=3100 
actions=set_field:10.16.0.5->ip_dst, resubmit(,45)
table=45, priority=100, actions=resubmit(,50)
table=50, priority=100, actions=resubmit(,60)
table=60, priority=100, actions=resubmit(,62)
table=62, priority=100, actions=resubmit(,65)
table=65, ip,dl_dst=a6:c1:a7:15:a4:3d,nw_dst=10.16.0.5, priority=3100, 
actions=p21

and the following in one named trace-test:

#! /bin/sh -v
ovs-vsctl add-br br0
ovs-vsctl add-port br0 p21
ovs-ofctl add-flows br0 flows
ovs-appctl ofproto/trace br0 
ip,dl_vlan=13,dl_src=d8:18:d3:fd:66:40,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.39.176.4,in_port=4

and then ran the latter inside "make sandbox".  I reproduced the
behavior.

When I look closer, I see that the logic inside OVS, in
commit_set_nw_action() in particular, uses the presence of a nonzero IP
protocol to determine whether there's actually an L3 header present.
Thus, specifying an IP protocol, e.g. with "icmp" or "tcp" or "udp",
makes the issue go away.
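
For example, re-running the same trace with only "ip" replaced by "tcp"
(a sketch based on the reasoning above) makes the set(ipv4(dst=...)) action
show up in the datapath actions:

ovs-appctl ofproto/trace br0 tcp,dl_vlan=13,dl_src=d8:18:d3:fd:66:40,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.39.176.4,in_port=4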

This is definitely surprising behavior.  I am not sure whether the
nonzero IP protocol number check is actually necessary.  If I remove it,
it does fix this problem.  That may be the correct fix.

On Tue, May 07, 2019 at 04:12:10PM -0600, Carl Baldwin wrote:
> Greetings,
> 
> I have been using ofproto/trace to verify the actions taken on various
> packets through an OVS bridge. My methodology is basically to specify
> the action in br_flow format to ovs-appctl ofproto/trace and then
> comparing the datapath actions line with expected actions. I ran into
> an interesting case. I'm reaching out to the mailing list to try to
> get an idea of what is happening and if this is expected behavior.
> 
> Thanks in advance for your time to consider this. Details follow.
> 
> I was testing a particular path that results in rewriting the ip_dst
> of the packet (a stateless DNAT). In my first attempt, the datapath
> actions line doesn't include the DNAT, though you can see the
> set_field action in table 35 in the output below.
> 
> This makes verifying the datapath actions problematic because an
> important action is missing. The backup plan here would be to scrape
> the actions from the output by looking for keywords like set_field,
> pop_vlan, and output in the more verbose trace output. Would you
> advise doing this instead of using the datapath actions line?
> 
> $ ovs-appctl ofproto/trace br0
> ip,dl_vlan=13,dl_src=d8:18:d3:fd:66:40,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.39.176.4,in_port=4
> Flow: 
> ip,in_port=4,dl_vlan=13,dl_vlan_pcp=0,vlan_tci1=0x,dl_src=d8:18:d3:fd:66:40,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.39.176.4,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
> 
> bridge("br0")
> -
>  0. ip,in_port=4,dl_dst=a6:c1:a7:15:a4:3d,nw_dst=10.39.176.4,
> priority 3100, cookie 0x1dfd900060
> resubmit(,25)
> 25. ip,vlan_tci=0x1000/0x1000,nw_dst=10.39.176.4, priority 3100,
> cookie 0x1dfd900060
> set_field:00:00:5e:00:01:01->eth_src
> set_field:a6:c1:a7:15:a4:3d->eth_dst
> pop_vlan
> resubmit(,28)
> 28. priority 100
> resubmit(,35)
> 35. ip,nw_dst=10.39.176.4, priority 3100, cookie 0x1dfd900060
> set_field:10.16.0.5->ip_dst
> resubmit(,45)
> 45. priority 100
> resubmit(,50)
> 50. priority 100
> resubmit(,60)
> 60. priority 100
> resubmit(,62)
> 62. priority 100
> resubmit(,65)
> 65. ip,dl_dst=a6:c1:a7:15:a4:3d,nw_dst=10.16.0.5, priority 3100,
> cookie 0x1dfd900060
> output:21
> 
> Final flow:
> ip,in_port=4,vlan_tci=0x,dl_src=00:00:5e:00:01:01,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.16.0.5,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
> Megaflow: 
> recirc_id=0,eth,ip,tun_id=0,in_port=4,dl_vlan=13,dl_vlan_pcp=0,dl_src=d8:18:d3:fd:66:40,dl_dst=a6:c1:a7:15:a4:3d,nw_src=0.0.0.0/5,nw_dst=10.39.176.4,nw_frag=no
> Datapath actions:
> set(eth(src=00:00:5e:00:01:01,dst=a6:c1:a7:15:a4:3d)),pop_vlan,7
> 
> By accident, I discovered that if I replaced ip with icmp in the
> br_flow spec then I got another unexpected result. The DNAT shows up
> on the datapath actions line. However, it also sets the ip src
> (src=0.0.0.0/248.0.0.0). This appears to coincide with a match in the
> megaflow (nw_src=0.0.0.0/5) effectively making it a no-op. If I change
> the nw_src value in the original br_flow, then I get different values
> for these but it always boils down to essentially a no-op.
> 
> This makes verifying the datapath

Re: [ovs-discuss] Conntrack and unexpected change in source IP

2019-05-07 Thread Ben Pfaff
On Tue, May 07, 2019 at 02:30:25PM -0700, Thiago Santos wrote:
> On Tue, May 7, 2019 at 1:58 PM Ben Pfaff  wrote:
> 
> > On Tue, May 07, 2019 at 12:05:43PM -0700, Thiago Santos wrote:
> > > Hello,
> > >
> > > I've been using OVS Conntrack integration for Source NAT and setting the
> > > Destination IP directly but this is having the side effect of overwriting
> > > the Conntrack set SNAT IP. I simplified my rules to look like this to
> > > reproduce the problem:
> > >
> > > cookie=0x0, duration=90070.633s, table=0, n_packets=32266,
> > > n_bytes=48644792, idle_age=1716, hard_age=65534,
> > > ip,in_port=1,nw_dst=1.1.1.2 actions=ct(table=1,zone=2,nat)
> > > cookie=0x0, duration=89993.501s, table=1, n_packets=32266,
> > > n_bytes=48644792, idle_age=1716, hard_age=65534,
> > > ct_state=+new+trk,ip,in_port=1,nw_dst=1.1.1.2
> > > actions=ct(commit,zone=2,nat(src=10.1.1.1)),resubmit(,2)
> > > cookie=0x0, duration=1757.194s, table=2, n_packets=0, n_bytes=0,
> > > idle_age=1757, priority=601,ip,nw_src=10.10.10.10 actions=drop
> > > cookie=0x0, duration=1808.236s, table=2, n_packets=5361, n_bytes=8105832,
> > > idle_age=1716, priority=600,ip actions=mod_nw_dst:10.1.1.2,output:2
> > >
> > > If I change the last 2 rules priorities so that they are in reverse
> > order,
> > > it seems to work.
> > >
> > > ovs-appctl dpctl/dump-flows shows me this:
> > > recirc_id(0x2),ct_state(+new+trk),eth(),eth_type(0x0800),ipv4(src=
> > > 0.0.0.0/248.0.0.0,dst=1.1.1.2,frag=no), packets:59, bytes:89208,
> > > used:0.168s, actions:ct(commit,zone=2,nat(src=10.1.1.1)),set(ipv4(src=
> > > 0.0.0.0/248.0.0.0,dst=10.1.1.2)),4
> > >
> > > So it looks like it is doing a set on the source IP because of the
> > matching
> > > on source IP of the 3rd rule above. Is there a way around this or am I
> > > doing something wrong?
> >
> > Here's an easier to read table:
> >
> > 0 32768 ip,in_port=1,nw_dst=1.1.1.2
> > actions=ct(table=1,zone=2,nat)
> >
> > 1 32768 ct_state=+new+trk,ip,in_port=1,nw_dst=1.1.1.2
> > actions=ct(commit,zone=2,nat(src=10.1.1.1)),resubmit(,2)
> >
> > 1   601 ip,nw_src=10.10.10.10
> > actions=drop
> >
> > 1   600 ip
> > actions=mod_nw_dst:10.1.1.2,output:2
> >
> > Indeed, if nw_src is 10.10.10.10, the packet gets dropped.  That's what
> > the flow table says.  Can you explain what you expect?
> >
> 
> If the packet is to hit the last rule, sending a packet with source IP
> 1.1.1.1 and destination IP 1.1.1.2, the result should be a packet with
> source ip 10.1.1.1 and destination ip 10.1.1.2 but the source IP ends up
> being 2.1.1.1 because of the masked source IP set in the datapath rule.
> 
> I believe it builds that without considering what the conntrack would be
> setting to the source IP when it does the NAT.

Thanks for the clarifications, I missed some things.  ct isn't my
specialty.

This use of ct breaks the rules, documented in ovs-actions(7), which say
that 'commit' and 'nat' may not be used together.  I don't know what is
supposed to happen in this case.  Maybe OVS should disallow it entirely,
or maybe the documentation should be updated.


[ovs-discuss] DNAT missing from datapath actions in ofproto/trace

2019-05-07 Thread Carl Baldwin
Greetings,

I have been using ofproto/trace to verify the actions taken on various
packets through an OVS bridge. My methodology is basically to specify
the action in br_flow format to ovs-appctl ofproto/trace and then
comparing the datapath actions line with expected actions. I ran into
an interesting case. I'm reaching out to the mailing list to try to
get an idea of what is happening and if this is expected behavior.

Thanks in advance for your time to consider this. Details follow.

I was testing a particular path that results in rewriting the ip_dst
of the packet (a stateless DNAT). In my first attempt, the datapath
actions line doesn't include the DNAT, though you can see the
set_field action in table 35 in the output below.

This makes verifying the datapath actions problematic because an
important action is missing. The backup plan here would be to scrape
the actions from the output by looking for keywords like set_field,
pop_vlan, and output in the more verbose trace output. Would you
advise doing this instead of using the datapath actions line?
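
A rough illustration of both options (a sketch only; the exact sed/grep
patterns are illustrative and would need tuning):

FLOW=ip,dl_vlan=13,dl_src=d8:18:d3:fd:66:40,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.39.176.4,in_port=4

# Option 1: compare only the "Datapath actions:" line.
ovs-appctl ofproto/trace br0 "$FLOW" | sed -n 's/^Datapath actions: //p'

# Option 2: scrape the per-table actions from the verbose trace instead.
ovs-appctl ofproto/trace br0 "$FLOW" | grep -E 'set_field|pop_vlan|output'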

$ ovs-appctl ofproto/trace br0
ip,dl_vlan=13,dl_src=d8:18:d3:fd:66:40,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.39.176.4,in_port=4
Flow: 
ip,in_port=4,dl_vlan=13,dl_vlan_pcp=0,vlan_tci1=0x,dl_src=d8:18:d3:fd:66:40,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.39.176.4,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0

bridge("br0")
-
 0. ip,in_port=4,dl_dst=a6:c1:a7:15:a4:3d,nw_dst=10.39.176.4,
priority 3100, cookie 0x1dfd900060
resubmit(,25)
25. ip,vlan_tci=0x1000/0x1000,nw_dst=10.39.176.4, priority 3100,
cookie 0x1dfd900060
set_field:00:00:5e:00:01:01->eth_src
set_field:a6:c1:a7:15:a4:3d->eth_dst
pop_vlan
resubmit(,28)
28. priority 100
resubmit(,35)
35. ip,nw_dst=10.39.176.4, priority 3100, cookie 0x1dfd900060
set_field:10.16.0.5->ip_dst
resubmit(,45)
45. priority 100
resubmit(,50)
50. priority 100
resubmit(,60)
60. priority 100
resubmit(,62)
62. priority 100
resubmit(,65)
65. ip,dl_dst=a6:c1:a7:15:a4:3d,nw_dst=10.16.0.5, priority 3100,
cookie 0x1dfd900060
output:21

Final flow:
ip,in_port=4,vlan_tci=0x,dl_src=00:00:5e:00:01:01,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.16.0.5,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
Megaflow: 
recirc_id=0,eth,ip,tun_id=0,in_port=4,dl_vlan=13,dl_vlan_pcp=0,dl_src=d8:18:d3:fd:66:40,dl_dst=a6:c1:a7:15:a4:3d,nw_src=0.0.0.0/5,nw_dst=10.39.176.4,nw_frag=no
Datapath actions:
set(eth(src=00:00:5e:00:01:01,dst=a6:c1:a7:15:a4:3d)),pop_vlan,7

By accident, I discovered that if I replaced ip with icmp in the
br_flow spec then I got another unexpected result. The DNAT shows up
on the datapath actions line. However, it also sets the ip src
(src=0.0.0.0/248.0.0.0). This appears to coincide with a match in the
megaflow (nw_src=0.0.0.0/5) effectively making it a no-op. If I change
the nw_src value in the original br_flow, then I get different values
for these but it always boils down to essentially a no-op.

This makes verifying the datapath actions problematic because I'd have
to write extra logic to detect and remove no-ops. I could bake the
no-op actions into my expected actions but I'm not confident that the
internal implementation won't change and leave me with a bunch of
failures to fix up later. This is internal implementation leaking into
the datapath actions. The same backup plan of scraping the output for
action lines and ignoring the datapath actions line would work here
too.
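
If the datapath actions line is used anyway, one possible sketch (relying on
the assumption that the no-op always appears as a masked src= set, which may
not hold across versions) is to strip it before comparing, reusing $FLOW from
the sketch above:

ovs-appctl ofproto/trace br0 "$FLOW" | sed -n 's/^Datapath actions: //p' | sed 's|src=[0-9.]*/[0-9.]*,||g'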

$ ovs-appctl ofproto/trace br0
icmp,dl_vlan=13,dl_src=d8:18:d3:fd:66:40,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.39.176.4,in_port=4
Flow: 
icmp,in_port=4,dl_vlan=13,dl_vlan_pcp=0,vlan_tci1=0x,dl_src=d8:18:d3:fd:66:40,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.39.176.4,nw_tos=0,nw_ecn=0,nw_ttl=0,icmp_type=0,icmp_code=0

bridge("br0")
-
 0. ip,in_port=4,dl_dst=a6:c1:a7:15:a4:3d,nw_dst=10.39.176.4,
priority 3100, cookie 0x1dfd900060
resubmit(,25)
25. ip,vlan_tci=0x1000/0x1000,nw_dst=10.39.176.4, priority 3100,
cookie 0x1dfd900060
set_field:00:00:5e:00:01:01->eth_src
set_field:a6:c1:a7:15:a4:3d->eth_dst
pop_vlan
resubmit(,28)
28. priority 100
resubmit(,35)
35. ip,nw_dst=10.39.176.4, priority 3100, cookie 0x1dfd900060
set_field:10.16.0.5->ip_dst
resubmit(,45)
45. priority 100
resubmit(,50)
50. priority 100
resubmit(,60)
60. priority 100
resubmit(,62)
62. priority 100
resubmit(,65)
65. ip,dl_dst=a6:c1:a7:15:a4:3d,nw_dst=10.16.0.5, priority 3100,
cookie 0x1dfd900060
output:21

Final flow:
icmp,in_port=4,vlan_tci=0x,dl_src=00:00:5e:00:01:01,dl_dst=a6:c1:a7:15:a4:3d,nw_src=1.1.1.1,nw_dst=10.16.0.5,nw_tos=0,nw_ecn=0,nw_ttl=0,icmp_type=0,icmp_code=0
 

Re: [ovs-discuss] Conntrack and unexpected change in source IP

2019-05-07 Thread Thiago Santos
On Tue, May 7, 2019 at 1:58 PM Ben Pfaff  wrote:

> On Tue, May 07, 2019 at 12:05:43PM -0700, Thiago Santos wrote:
> > Hello,
> >
> > I've been using OVS Conntrack integration for Source NAT and setting the
> > Destination IP directly but this is having the side effect of overwriting
> > the Conntrack set SNAT IP. I simplified my rules to look like this to
> > reproduce the problem:
> >
> > cookie=0x0, duration=90070.633s, table=0, n_packets=32266,
> > n_bytes=48644792, idle_age=1716, hard_age=65534,
> > ip,in_port=1,nw_dst=1.1.1.2 actions=ct(table=1,zone=2,nat)
> > cookie=0x0, duration=89993.501s, table=1, n_packets=32266,
> > n_bytes=48644792, idle_age=1716, hard_age=65534,
> > ct_state=+new+trk,ip,in_port=1,nw_dst=1.1.1.2
> > actions=ct(commit,zone=2,nat(src=10.1.1.1)),resubmit(,2)
> > cookie=0x0, duration=1757.194s, table=2, n_packets=0, n_bytes=0,
> > idle_age=1757, priority=601,ip,nw_src=10.10.10.10 actions=drop
> > cookie=0x0, duration=1808.236s, table=2, n_packets=5361, n_bytes=8105832,
> > idle_age=1716, priority=600,ip actions=mod_nw_dst:10.1.1.2,output:2
> >
> > If I change the last 2 rules priorities so that they are in reverse
> order,
> > it seems to work.
> >
> > ovs-appctl dpctl/dump-flows shows me this:
> > recirc_id(0x2),ct_state(+new+trk),eth(),eth_type(0x0800),ipv4(src=
> > 0.0.0.0/248.0.0.0,dst=1.1.1.2,frag=no), packets:59, bytes:89208,
> > used:0.168s, actions:ct(commit,zone=2,nat(src=10.1.1.1)),set(ipv4(src=
> > 0.0.0.0/248.0.0.0,dst=10.1.1.2)),4
> >
> > So it looks like it is doing a set on the source IP because of the
> matching
> > on source IP of the 3rd rule above. Is there a way around this or am I
> > doing something wrong?
>
> Here's an easier to read table:
>
> 0 32768 ip,in_port=1,nw_dst=1.1.1.2
> actions=ct(table=1,zone=2,nat)
>
> 1 32768 ct_state=+new+trk,ip,in_port=1,nw_dst=1.1.1.2
> actions=ct(commit,zone=2,nat(src=10.1.1.1)),resubmit(,2)
>
> 1   601 ip,nw_src=10.10.10.10
> actions=drop
>
> 1   600 ip
> actions=mod_nw_dst:10.1.1.2,output:2
>
> Indeed, if nw_src is 10.10.10.10, the packet gets dropped.  That's what
> the flow table says.  Can you explain what you expect?
>

If the packet is to hit the last rule, sending a packet with source IP
1.1.1.1 and destination IP 1.1.1.2, the result should be a packet with
source ip 10.1.1.1 and destination ip 10.1.1.2 but the source IP ends up
being 2.1.1.1 because of the masked source IP set in the datapath rule.

I believe it builds that without considering what the conntrack would be
setting to the source IP when it does the NAT.


Re: [ovs-discuss] [OVN] Incremental processing patches

2019-05-07 Thread Han Zhou
On Tue, May 7, 2019 at 9:56 AM Justin Pettit  wrote:
>
> Hi, Daniel.  I don't think this is a bad approach.  However, at the
moment, Han's work is with ovn-controller and ddlog is with ovn-northd.
We've talked about using ddlog in ovn-controller, but there's been no work
in that direction yet.  At this point, I think Han's patches should be
evaluated on their own merits and not held back for ddlog.  (Unless someone
is looking to run with it.  :-) )
>
> --Justin
>
>
> > On May 7, 2019, at 9:04 AM, Daniel Alvarez Sanchez 
wrote:
> >
> > Hi folks,
> >
> > After some conversations with Han (thanks for your time and great
> > talk!) at the Open Infrastructure Summit in Denver last week, here I
> > go with this - somehow crazy - idea.
> >
> > Since DDlog approach for incremental processing is not going to happen
> > soon and Han's reported his patches to be working quite well and seem
> > to be production ready (please correct me if I'm wrong Han), would it
> > be possible to somehow enable those and then drop it after DDlog is in
> > a good shape?
> >
> > Han keeps rebasing them [0] and I know we could use them but I think
> > that the whole OVN project would get better adoption if we could have
> > them in place in the main repo. The main downside is its complexity
> > but I wonder if we can live with it until DDlog becomes a reality.
> >
> > Apologies in advance if this is just a terrible idea since I'm not
> > fully aware of what this exactly involves in terms of technical
> > complexity and feasibility. I believe that it'll make DDlog harder as
> > it'll have to deal with both approaches at the same time but looks
> > like the performance benefits are huge so worth to at least consider
> > it?
> >
> > Thanks a lot!
> > Daniel
> >

Thanks Daniel and Justin!

Daniel, I can't guarantee anything, but from my perspective the patches are
production ready; at least they have been used at eBay for months without any
issues. Without them we would have been suffering from high CPU load on all
OVN HVs. As for your concern about DDlog, it should not make DDlog harder.
Instead, it should help, since the dependencies will be much clearer, which is
needed for a DDlog rewrite and makes it more straightforward.

As for complexity, it is certainly more complex than simply recomputing, and
presumably more complex than DDlog, too. However, I don't think it is really
too complex to maintain - at least I was able to rebase it from version to
version (although rebasing such a big change is always a pain for me). What
keeps this approach maintainable is that the change_handler implementations
are optional, so we can choose to implement the simple but effective ones
first and leave the less effective but more complex ones for the future, or
never - at least we don't lose anything. The current version, with the most
frequent changes handled incrementally, is effective and not very complex,
from my perspective.

I think Justin has a good point that the patches should be evaluated on their
own, without considering DDlog. Although I support DDlog in the long run, I
haven't had time to start any serious work on it yet (my apologies). I want to
work on that, but right now it isn't the highest priority for OVN scalability,
since the incremental patches solved most of the ovn-controller problems for
me. I think maybe we should wait and see how DDlog works for northd first, so
that we gain more experience with DDlog and avoid pitfalls. Even if we start
rewriting ovn-controller with DDlog at full speed, I guess it may take at
least one more release cycle to reach the same stability and performance as
the current incremental processing patches, and that may be optimistic. So I'd
support upstreaming the incremental processing patches first, regardless of
the DDlog schedule. And I admit there is a strong personal reason: maintaining
branches and keeping them rebased is not fun at all.

I'd like to hear more from Ben and Mark, who both worked on this with me
and helped a lot.


Re: [ovs-discuss] Conntrack and unexpected change in source IP

2019-05-07 Thread Ben Pfaff
On Tue, May 07, 2019 at 12:05:43PM -0700, Thiago Santos wrote:
> Hello,
> 
> I've been using OVS Conntrack integration for Source NAT and setting the
> Destination IP directly but this is having the side effect of overwriting
> the Conntrack set SNAT IP. I simplified my rules to look like this to
> reproduce the problem:
> 
> cookie=0x0, duration=90070.633s, table=0, n_packets=32266,
> n_bytes=48644792, idle_age=1716, hard_age=65534,
> ip,in_port=1,nw_dst=1.1.1.2 actions=ct(table=1,zone=2,nat)
> cookie=0x0, duration=89993.501s, table=1, n_packets=32266,
> n_bytes=48644792, idle_age=1716, hard_age=65534,
> ct_state=+new+trk,ip,in_port=1,nw_dst=1.1.1.2
> actions=ct(commit,zone=2,nat(src=10.1.1.1)),resubmit(,2)
> cookie=0x0, duration=1757.194s, table=2, n_packets=0, n_bytes=0,
> idle_age=1757, priority=601,ip,nw_src=10.10.10.10 actions=drop
> cookie=0x0, duration=1808.236s, table=2, n_packets=5361, n_bytes=8105832,
> idle_age=1716, priority=600,ip actions=mod_nw_dst:10.1.1.2,output:2
> 
> If I change the last 2 rules priorities so that they are in reverse order,
> it seems to work.
> 
> ovs-appctl dpctl/dump-flows shows me this:
> recirc_id(0x2),ct_state(+new+trk),eth(),eth_type(0x0800),ipv4(src=
> 0.0.0.0/248.0.0.0,dst=1.1.1.2,frag=no), packets:59, bytes:89208,
> used:0.168s, actions:ct(commit,zone=2,nat(src=10.1.1.1)),set(ipv4(src=
> 0.0.0.0/248.0.0.0,dst=10.1.1.2)),4
> 
> So it looks like it is doing a set on the source IP because of the matching
> on source IP of the 3rd rule above. Is there a way around this or am I
> doing something wrong?

Here's an easier to read table:

0 32768 ip,in_port=1,nw_dst=1.1.1.2
actions=ct(table=1,zone=2,nat)

1 32768 ct_state=+new+trk,ip,in_port=1,nw_dst=1.1.1.2
actions=ct(commit,zone=2,nat(src=10.1.1.1)),resubmit(,2)

1   601 ip,nw_src=10.10.10.10
actions=drop

1   600 ip
actions=mod_nw_dst:10.1.1.2,output:2 

Indeed, if nw_src is 10.10.10.10, the packet gets dropped.  That's what
the flow table says.  Can you explain what you expect?


[ovs-discuss] Conntrack and unexpected change in source IP

2019-05-07 Thread Thiago Santos
Hello,

I've been using OVS Conntrack integration for Source NAT and setting the
Destination IP directly but this is having the side effect of overwriting
the Conntrack set SNAT IP. I simplified my rules to look like this to
reproduce the problem:

cookie=0x0, duration=90070.633s, table=0, n_packets=32266,
n_bytes=48644792, idle_age=1716, hard_age=65534,
ip,in_port=1,nw_dst=1.1.1.2 actions=ct(table=1,zone=2,nat)
cookie=0x0, duration=89993.501s, table=1, n_packets=32266,
n_bytes=48644792, idle_age=1716, hard_age=65534,
ct_state=+new+trk,ip,in_port=1,nw_dst=1.1.1.2
actions=ct(commit,zone=2,nat(src=10.1.1.1)),resubmit(,2)
cookie=0x0, duration=1757.194s, table=2, n_packets=0, n_bytes=0,
idle_age=1757, priority=601,ip,nw_src=10.10.10.10 actions=drop
cookie=0x0, duration=1808.236s, table=2, n_packets=5361, n_bytes=8105832,
idle_age=1716, priority=600,ip actions=mod_nw_dst:10.1.1.2,output:2

If I change the last 2 rules priorities so that they are in reverse order,
it seems to work.

ovs-appctl dpctl/dump-flows shows me this:
recirc_id(0x2),ct_state(+new+trk),eth(),eth_type(0x0800),ipv4(src=
0.0.0.0/248.0.0.0,dst=1.1.1.2,frag=no), packets:59, bytes:89208,
used:0.168s, actions:ct(commit,zone=2,nat(src=10.1.1.1)),set(ipv4(src=
0.0.0.0/248.0.0.0,dst=10.1.1.2)),4

So it looks like it is doing a set on the source IP because of the matching
on source IP of the 3rd rule above. Is there a way around this or am I
doing something wrong?
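
For reference, the reordering mentioned above would look roughly like this
sketch (the bridge name br0 is assumed, and note that with the priorities
swapped the drop rule can no longer match anything):

ovs-ofctl add-flow br0 "table=2,priority=601,ip,actions=mod_nw_dst:10.1.1.2,output:2"
ovs-ofctl add-flow br0 "table=2,priority=600,ip,nw_src=10.10.10.10,actions=drop"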

Thanks,


Re: [ovs-discuss] [OVN] Incremental processing patches

2019-05-07 Thread Justin Pettit
Hi, Daniel.  I don't think this is a bad approach.  However, at the moment, 
Han's work is with ovn-controller and ddlog is with ovn-northd.  We've talked 
about using ddlog in ovn-controller, but there's been no work in that direction 
yet.  At this point, I think Han's patches should be evaluated on their own 
merits and not held back for ddlog.  (Unless someone is looking to run with it. 
 :-) )

--Justin


> On May 7, 2019, at 9:04 AM, Daniel Alvarez Sanchez  
> wrote:
> 
> Hi folks,
> 
> After some conversations with Han (thanks for your time and great
> talk!) at the Open Infrastructure Summit in Denver last week, here I
> go with this - somehow crazy - idea.
> 
> Since DDlog approach for incremental processing is not going to happen
> soon and Han's reported his patches to be working quite well and seem
> to be production ready (please correct me if I'm wrong Han), would it
> be possible to somehow enable those and then drop it after DDlog is in
> a good shape?
> 
> Han keeps rebasing them [0] and I know we could use them but I think
> that the whole OVN project would get better adoption if we could have
> them in place in the main repo. The main downside is its complexity
> but I wonder if we can live with it until DDlog becomes a reality.
> 
> Apologies in advance if this is just a terrible idea since I'm not
> fully aware of what this exactly involves in terms of technical
> complexity and feasibility. I believe that it'll make DDlog harder as
> it'll have to deal with both approaches at the same time but looks
> like the performance benefits are huge so worth to at least consider
> it?
> 
> Thanks a lot!
> Daniel
> 
> [0] https://github.com/hzhou8/ovs/tree/ip12_rebased_mar29



Re: [ovs-discuss] Handling conf.db ownership on OVS_USER_ID changes

2019-05-07 Thread Numan Siddique
On Mon, May 6, 2019 at 6:04 PM Aaron Conole  wrote:

> Jaime Caamaño Ruiz  writes:
>
> >> Agree. I will try to address this issue. I think we can have a
> >> separate
> >> run time/log directory for OVN. ovn-controller needs to talk to the
> >> local ovsdb-server
> >> and br-int.mgmt and other related socket interfaces, so it needs
> >> access
> >> to
> >> the /var/run/openvswitch/ folder. I think we can solve this.
> >
> > I have a (yet to be submitted) patch on my side that changes OVN to run
> > as the same user as OVS. I thought it less disruptive than also
> > changing the log directory.
>
>
Great. Looking forward to the patch.

Thanks


> That sounds interesting.
>
> > openvswitch-ipsec still logs as root, though, and I don't see a way around
> > that, so it probably needs to log to a different directory.
> >
> > Jaime.
> >
> > -Original Message-
> > From: Numan Siddique 
> > To: Aaron Conole 
> > Cc: Jaime Caamaño Ruiz , ovs-discuss  > nvswitch.org>
> > Subject: Re: [ovs-discuss] Handling conf.db ownership on OVS_USER_ID
> > changes
> > Date: Thu, 2 May 2019 19:20:28 +0530
> >
> >
> >
> >
> > On Mon, Apr 29, 2019, 9:36 PM Aaron Conole  wrote:
> >> Jaime Caamaño Ruiz  writes:
> >>
> >> >> As a "security concern" you mean something among the lines where
> >> one
> >> >> of ovs-* processes running under openvswitch user would go ahead
> >> and
> >> >> create a file with its owner that later one of ovn processes would
> >> >> blindly reuse without checking that it actually belongs to
> >> root:root?
> >> >
> >> > One example is that the OVS user could create link as any OVN log
> >> file
> >> > to other file owned by root and then OVN would write to that file.
> >>
> >> There's a long-standing issue of OVN running as root.  I don't see
> >> any
> >> reason it should.
> >
> > Agree. I will try to address this issue. I think we can have a separate
> > run time/log directory for OVN. ovn-controller needs to talk to the
> > local ovsdb-server
> > and br-int.mgmt and other related socket interfaces, so it needs access
> > to
> > the /var/run/openvswitch/ folder. I think we can solve this.
> >
> > Thanks
> > Numan
> >
> >> >> Can you give a more concrete example? I believe logrotate is
> >> running
> >> >> under root and should be able to rotate everything?
> >> >
> >> > The logrotate configuration for openvswitch logs has a 'su' directive
> >> > to run under the openvswitch user, precisely to prevent something
> >> > similar to the above. So it won't be able to rotate root owned logs in
> >> > the openvswitch directory. How it fails precisely depends on the global
> >> > logrotate configuration. For example, it will fail to create the new log
> >> > file after rotation if the 'create' directive is used. Or it may fail
> >> > to compress the rotated log file as it won't be able to read it.
> >> >
> >> > BR
> >> > Jaime
> >> >
> >> > -Original Message-
> >> > From: Ansis Atteka 
> >> > To: jcaam...@suse.de
> >> > Cc: ovs-discuss@openvswitch.org, Ben Pfaff 
> >> > Subject: Re: [ovs-discuss] Handling conf.db ownership on
> >> OVS_USER_ID
> >> > changes
> >> > Date: Wed, 24 Apr 2019 19:07:24 -0700
> >> >
> >> > On Tue, 23 Apr 2019 at 09:30, Jaime Caamaño Ruiz 
> >> > wrote:
> >> >>
> >> >> Hello
> >> >>
> >> >> So the non root owned log directory (and run directory) is shared
> >> >> between non root OVS processes and root OVN processes. Doesn't
> >> this
> >> >> raise some security concerns?
> >> >
> >> > As a "security concern" you mean something among the lines where
> >> one
> >> > of ovs-* processes running under openvswitch user would go ahead
> >> and
> >> > create a file with its owner that later one of ovn processes would
> >> > blindly reuse without checking that it actually belongs to
> >> root:root?
> >> >
> >> >
> >> >>
> >> >> Also, since logrotate rotates the logs with the OVS user, it may fail
> >> >> to some extent to rotate the root owned OVN log files. Consequences
> >> >> vary, but in its current state some logging might be lost. This could
> >> >> be improved, but I would guess that the appropriate thing to do is to
> >> >> either use a different log directory for OVN or make its processes run
> >> >> with the OVS user also. Has any of this been considered?
> >> >
> >> > Can you give a more concrete example? I believe logrotate is
> >> running
> >> > under root and should be able to rotate everything?
> >> >
> >> >
> >> >>
> >> >> BR
> >> >> Jaime.
> >> >>
> >> >>
> >> >> -Original Message-
> >> >> From: Jaime Caamaño Ruiz 
> >> >> Reply-to: jcaam...@suse.com
> >> >> To: Ansis Atteka 
> >> >> Cc: ovs-discuss@openvswitch.org, Ben Pfaff 
> >> >> Subject: Re: [ovs-discuss] Handling conf.db ownership on
> >> OVS_USER_ID
> >> >> changes
> >> >> Date: Wed, 17 Apr 2019 12:52:32 +0200
> >> >>
> >> >> > You also need to chown /var/log/openvswitch.*.log files.
> >> >>
> >> >> OVS seems to be already handling this. I don't know the details bu

[ovs-discuss] [OVN] Incremental processing patches

2019-05-07 Thread Daniel Alvarez Sanchez
Hi folks,

After some conversations with Han (thanks for your time and great
talk!) at the Open Infrastructure Summit in Denver last week, here I
go with this - somehow crazy - idea.

Since DDlog approach for incremental processing is not going to happen
soon and Han's reported his patches to be working quite well and seem
to be production ready (please correct me if I'm wrong Han), would it
be possible to somehow enable those and then drop it after DDlog is in
a good shape?

Han keeps rebasing them [0] and I know we could use them but I think
that the whole OVN project would get better adoption if we could have
them in place in the main repo. The main downside is its complexity
but I wonder if we can live with it until DDlog becomes a reality.

Apologies in advance if this is just a terrible idea since I'm not
fully aware of what this exactly involves in terms of technical
complexity and feasibility. I believe that it'll make DDlog harder as
it'll have to deal with both approaches at the same time but looks
like the performance benefits are huge so worth to at least consider
it?

Thanks a lot!
Daniel

[0] https://github.com/hzhou8/ovs/tree/ip12_rebased_mar29


[ovs-discuss] mapping between the upcall request in kernel and the upcall_handler thread in userspace

2019-05-07 Thread pei Jikui
When the kernel datapath sends an upcall request to vswitchd in userspace,
which upcall_handler thread in vswitchd will deal with this upcall request?
What are the criteria for dispatching a given upcall to a particular
upcall_handler?


Much thanks




Re: [ovs-discuss] ovs-vswitchd 100% CPU usage after hard reboot

2019-05-07 Thread Jamon Camisso
Following up on this since it is still an issue.

strace on the thread in question shows  the following:

13:35:47 poll([{fd=23, events=POLLIN}], 1, 0) = 0 (Timeout) <0.18>
13:35:47 epoll_wait(42, [{EPOLLIN, {u32=3, u64=3}}], 9, 0) = 1 <0.18>
13:35:47 recvmsg(417, {msg_namelen=0}, MSG_DONTWAIT) = -1 EAGAIN
(Resource temporarily unavailable) <0.18>
13:35:47 poll([{fd=11, events=POLLIN}, {fd=42, events=POLLIN}, {fd=23,
events=POLLIN}], 3, 2147483647) = 1 ([{fd=42, revents=POLLIN}]) <0.19>
13:35:47 getrusage(RUSAGE_THREAD, {ru_utime={tv_sec=490842,
tv_usec=749026}, ru_stime={tv_sec=710657, tv_usec=442946}, ...}) = 0
<0.18>
13:35:47 poll([{fd=23, events=POLLIN}], 1, 0) = 0 (Timeout) <0.18>
13:35:47 epoll_wait(42, [{EPOLLIN, {u32=3, u64=3}}], 9, 0) = 1 <0.19>
13:35:47 recvmsg(417, {msg_namelen=0}, MSG_DONTWAIT) = -1 EAGAIN
(Resource temporarily unavailable) <0.18>
13:35:47 poll([{fd=11, events=POLLIN}, {fd=42, events=POLLIN}, {fd=23,
events=POLLIN}], 3, 2147483647) = 1 ([{fd=42, revents=POLLIN}]) <0.19>
13:35:47 getrusage(RUSAGE_THREAD, {ru_utime={tv_sec=490842,
tv_usec=749108}, ru_stime={tv_sec=710657, tv_usec=442946}, ...}) = 0
<0.17>


And if I strace with -c to collect a summary, after 4-5 seconds it shows
the following:

sudo strace -c -p 1658344
strace: Process 1658344 attached
^Cstrace: Process 1658344 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  0.00        0.00           0         9           write
  0.00        0.00           0     56397           poll
  0.00        0.00           0     28198     28198 recvmsg
  0.00        0.00           0     28199           getrusage
  0.00        0.00           0     14856           futex
  0.00        0.00           0     28199           epoll_wait
------ ----------- ----------- --------- --------- ----------------
100.00        0.00                141150     28254 total

I'm really at a loss here as to what's happening, has anyone seen
behaviour like this?
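
In case it is useful to anyone else, a rough sketch of one way to single out
the spinning thread before attaching strace (the PID/TID values are
host-specific, and <TID> is a placeholder):

# Find the busiest ovs-vswitchd thread (TID), e.g. the handler thread above.
ps -L -o tid,pcpu,comm -p "$(pidof ovs-vswitchd)" | sort -k2 -rn | head
# Then attach strace to that TID, as in the output above.
sudo strace -tt -T -p <TID>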

Cheers, Jamon
On 02/05/2019 23:12, Jamon Camisso wrote:
> I'm seeing an identical issue to the one posted here a few months ago:
> 
> https://mail.openvswitch.org/pipermail/ovs-discuss/2018-October/047558.html
> - I'll include the bug report template at the end.
> 
> The issue is an ovs-vswitchd thread consuming 100% CPU in a very lightly
> used Openstack Rocky cloud running on Bionic. Logs are filled with
> entries like this, about ~14000 per day:
> 
> 2019-05-01T18:34:30.110Z|237220|poll_loop(handler89)|INFO|Dropped
> 1092844 log messages in last 6 seconds (most recently, 0 seconds ago)
> due to excessive rate
> 
> 2019-05-01T18:34:30.110Z|237221|poll_loop(handler89)|INFO|wakeup due to
> [POLLIN] on fd 42 (unknown anon_inode:[eventpoll]) at
> ../lib/dpif-netlink.c:2786 (99% CPU usage)
> 
> ovs-vswitchd is running alongside various neutron processes
> (lbaasv2-agent, metadata-agent, l3-agent, dhcp-agent, openvswitch-agent)
> inside an LXC container on a physical host. There is a single neutron
> router, and the entire environment including br-tun, br-ex, and br-int
> traffic barely goes over 200KiB/s TX/RX combined.
> 
> If it is an issue with the Ubuntu packaged version (the other report is
> the same 2.10.0 package on Bionic which is suspicious), I've also filed
> a bug to track things here:
> https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1827264
> 
> Thanks for any feedback or troubleshooting steps anyone can provide.
> 
> Cheers, Jamon
> 
> 
> 
> 
> 
> Bug template:
> 
> What you did that make the problem appear.
>   - host server was hard rebooted. lxc containers came back up fine, but
> ovs-vswitchd thread is spinning CPU and has remained that way for 10
> days
> 
> What you expected to happen.
>   - negligible CPU usage since the cloud isn't in production
> 
> What actually happened.
>   - a single ovs-vswitchd thread is spinning at 100% CPU and logs are
> populated with thousands of messages claiming a million+ messages
> are dropped every 6 seconds
> 
> The Open vSwitch version number (as output by ovs-vswitchd --version).
>   - ovs-vswitchd (Open vSwitch) 2.10.0 (the Ubuntu packaged version)
> 
> The Git commit number (as output by git rev-parse HEAD)
>   - N/A
> 
> Any local patches or changes you have applied (if any).
>   - N/A
> 
> The kernel version on which Open vSwitch is running (from /proc/version)
>   - Linux version 4.15.0-47-generic (buildd@lgw01-amd64-001) (gcc
> version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #50-Ubuntu SMP Wed Mar 13
> 10:44:52 UTC 2019
> 
> The distribution and version number of your OS (e.g. “Centos 5.0”).
>   - Ubuntu 18.04.2 LTS
> 
> The contents of the vswitchd configuration database (usually
> /etc/openvswitch/conf.db).
>   - See attached conf.db.txt
> 
> The output of ovs-dpctl show.
>   - See below:
> 
> root@juju-df624b-4-lxd-10:~# ovs-dpctl show
> system@ovs-system:
>   lookups: hit:223561120 misse

Re: [ovs-discuss] OVS-DPDK giving lower throughput then Native OVS

2019-05-07 Thread Harsh Gondaliya
So is there any way to have TSO work with OVS-DPDK? Are there any patches
which can be applied? I followed this Intel page, and the author was able to
get 2.5x higher throughput for OVS-DPDK compared to native OVS:
https://software.intel.com/en-us/articles/set-up-open-vswitch-with-dpdk-on-ubuntu-server
In fact, this topic has been discussed quite a lot in the past and many
patches have been uploaded. Are these patches already applied in OVS 2.11, or
do we need to apply them separately?

Being a student and a beginner with Linux itself, I do not know how all
these patches work or how to apply them.
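
As an aside, given the MTU point quoted below, one partial mitigation that is
often suggested is to raise the MTU on the vhost-user path and check which
offloads the guest virtio NIC actually negotiates. A sketch only; the port and
interface names here are assumptions:

# On the host: request a larger MTU on the vhost-user interfaces.
ovs-vsctl set Interface vhost-user1 mtu_request=9000
ovs-vsctl set Interface vhost-user2 mtu_request=9000

# In each guest (after raising its own interface MTU to match): check offloads.
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-receive-offload'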

I think the reason of lower throughput in the scenario of OVS-DPDK is that
>> TSO(GSO)& GRO are not supported in OVS-DPDK. So the packets between the VMs
>> are limited to the MTU of the vhostuser ports.
>>
>
> And the kernel based OVS supports TSO(GSO)&GRO, the TCP packets can be up
> to 64KB, so the throughput of iperf between two VMs is much higher.
>
>
>
>
>
> Xu Binbin (xubinbin)
>
> Software Development Engineer
> NIV Nanjing Dept. IV/Wireless Product R&D Institute/Wireless Product Operation
>
> ZTE Corporation
> 4/F, R&D Building, No.6 Huashen Road,
> Yuhuatai District, Nanjing, P.R. China,
> M: +86 13851437610
> E: xu.binb...@zte.com.cn
> www.zte.com.cn
> Original Mail
> *From:* HarshGondaliya 
> *To:* ovs-discuss ;
> *Date:* 2019-04-12 15:34
> *Subject:* *[ovs-discuss] OVS-DPDK giving lower throughput then Native OVS*
>
> I had connected two VMs to native OVS bridge and I got iperf test result
> of around *35-37Gbps*.
> Now when I am performing similar tests with two VMs connected to OVS-DPDK
> bridge using vhostuser ports I am getting the iperf test results as around 
> *6-6.5
> Gbps.*
> I am unable to understand the reason for such low throughput in case of
> OVS-DPDK. I am using OVS version 2.11.0
>
> I have 4 physical cores on my CPU (i.e. 8 logical cores) and have 16 GB
> system. I have allocated 6GB for the hugepages pool. 2GB of it was given to
> OVS socket mem option and the remaining 4GB was given to Virtual machines
> for memory backing (2Gb per VM). These are some of the configurations of
> my OVS-DPDK bridge:
>
> root@dpdk-OptiPlex-5040:/home/dpdk# ovs-vswitchd
> unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
> 2019-04-12T07:01:00Z|1|ovs_numa|INFO|Discovered 8 CPU cores on NUMA
> node 0
> 2019-04-12T07:01:00Z|2|ovs_numa|INFO|Discovered 1 NUMA nodes and 8 CPU
> cores
> 2019-04-12T07:01:00Z|3|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
> connecting...
> 2019-04-12T07:01:00Z|4|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
> connected
> 2019-04-12T07:01:00Z|5|dpdk|INFO|Using DPDK 18.11.0
> 2019-04-12T07:01:00Z|6|dpdk|INFO|DPDK Enabled - initializing...
> 2019-04-12T07:01:00Z|7|dpdk|INFO|No vhost-sock-dir provided -
> defaulting to /usr/local/var/run/openvswitch
> 2019-04-12T07:01:00Z|8|dpdk|INFO|IOMMU support for vhost-user-client
> disabled.
> 2019-04-12T07:01:00Z|9|dpdk|INFO|Per port memory for DPDK devices
> disabled.
> 2019-04-12T07:01:00Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0xA
> --socket-mem 2048 --socket-limit 2048.
> 2019-04-12T07:01:00Z|00011|dpdk|INFO|EAL: Detected 8 lcore(s)
> 2019-04-12T07:01:00Z|00012|dpdk|INFO|EAL: Detected 1 NUMA nodes
> 2019-04-12T07:01:00Z|00013|dpdk|INFO|EAL: Multi-process socket
> /var/run/dpdk/rte/mp_socket
> 2019-04-12T07:01:00Z|00014|dpdk|INFO|EAL: Probing VFIO support...
> 2019-04-12T07:01:00Z|00015|dpdk|INFO|EAL: PCI device :00:1f.6 on NUMA
> socket -1
> 2019-04-12T07:01:00Z|00016|dpdk|WARN|EAL:   Invalid NUMA socket, default
> to 0
> 2019-04-12T07:01:00Z|00017|dpdk|INFO|EAL:   probe driver: 8086:15b8
> net_e1000_em
> 2019-04-12T07:01:00Z|00018|dpdk|INFO|DPDK Enabled - initialized
> 2019-04-12T07:01:00Z|00019|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports recirculation
> 2019-04-12T07:01:00Z|00020|ofproto_dpif|INFO|netdev@ovs-netdev: VLAN
> header stack length probed as 1
> 2019-04-12T07:01:00Z|00021|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS
> label stack length probed as 3
> 2019-04-12T07:01:00Z|00022|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports truncate action
> 2019-04-12T07:01:00Z|00023|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports unique flow ids
> 2019-04-12T07:01:00Z|00024|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports clone action
> 2019-04-12T07:01:00Z|00025|ofproto_dpif|INFO|netdev@ovs-netdev: Max
> sample nesting level probed as 10
> 2019-04-12T07:01:00Z|00026|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports eventmask in conntrack action
> 2019-04-12T07:01:00Z|00027|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports ct_clear action
> 2019-04-12T07:01:00Z|00028|ofproto_dpif|INFO|netdev@ovs-netdev: Max
> dp_has