[ovs-discuss] watch_group liveness check for OVS groups

2021-01-21 Thread Alexander Constantinescu
Hi,

TL;DR: I am wondering if there's any specific action/convention that needs
to be defined for groups which are referenced by a watch_group, so that the
liveness check works correctly. FYI: I can't use a liveness check on
a dedicated OVS port in this case.

I have not been able to find much documentation surrounding watch_group,
the only doc I've been basing myself on is:
http://www.openvswitch.org/support/dist-docs/ovs-ofctl.8.html

I am working on a POC whose goal is to load balance packets between two
nexthops for packets matching a given flow. Essentially, I have this flow
which gets hit:

 cookie=0x0, duration=12657.001s, table=100, n_packets=1423,
n_bytes=113190, priority=100,ip,reg0=0x5d41f9 actions=group:2

for which I have the following groups defined:

 
group_id=3,type=select,bucket=weight:100,actions=ct(commit),move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.19.0.2->tun_dst,output:vxlan0

group_id=2,type=select,bucket=weight:100,watch_group:3,actions=group:3,bucket=weight:100,watch_group:4,actions=group:4
 
group_id=4,type=select,bucket=weight:100,actions=ct(commit),move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.19.0.4->tun_dst,output:vxlan0

Group 2 thus load balances equally between group 3 (which forwards packets
to nexthop 172.19.0.2) and group 4 (corresponding to nexthop 172.19.0.4).

The load balancing works; however, the watch_group does not seem to have
any impact. What I mean by that is: if I shut down the node corresponding
to either of my nexthops, group 2 will still try to send packets to the
nexthop (node) which I've just shut down.
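For what it's worth, my current reading of the OpenFlow group semantics
(hedged, since I may well be wrong) is: a bucket is live if its watch_port
or watch_group is live, and a group is live if at least one of its buckets
is live. Groups 3 and 4 only output to vxlan0, which stays up even when the
remote node dies, so nothing ever marks them dead. A workaround I'm
considering (untested sketch; the per-nexthop port names vxlan-n2/vxlan-n4
are made up) is one BFD-enabled tunnel port per nexthop, with each bucket
watching its own output port:

```shell
# Untested sketch: split vxlan0 into one tunnel port per nexthop and
# enable BFD on each, so OVS can detect remote failure (the port names
# vxlan-n2/vxlan-n4 are hypothetical).
ovs-vsctl set interface vxlan-n2 bfd:enable=true
ovs-vsctl set interface vxlan-n4 bfd:enable=true

# Each bucket watches its own output port; a bucket whose port is down
# (per BFD) should be skipped by the select group.
ovs-ofctl -O OpenFlow15 add-group br0 'group_id=2,type=select,bucket=weight:100,watch_port:vxlan-n2,actions=ct(commit),move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],output:vxlan-n2,bucket=weight:100,watch_port:vxlan-n4,actions=ct(commit),move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],output:vxlan-n4'
```

(Here each vxlan port would be created with its own options:remote_ip, so
the set_field:tun_dst step goes away.)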

Honestly, I don't expect OVS to be able to determine the liveness of my
group based off of what I've written, but I don't have any better idea of
what to do. I am unable to tell if there's a specific action/convention
that needs to be defined for groups which are referenced by a watch_group.

Thanks in advance for any help!
-- 

Best regards,


Alexander Constantinescu

Software Engineer, Openshift SDN

Red Hat <https://www.redhat.com/>

acons...@redhat.com
<https://www.redhat.com/>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [OVN]: IP address representing external destinations

2020-09-18 Thread Alexander Constantinescu
Hi Han

Sorry for the late reply.

Is this the current situation?
>

Yes, it is.

When you say there are too many default routes, what do you mean in the
> above example? How would the SOUTH_TO_NORTH_IP solve the problem?
>

Each  corresponds to a node in our cluster, like this:

  ip4.src ==  && ip4.dst ==
, allow
  ip6.src ==  && ip6.dst ==
, allow
  ip4.src ==  && ip4.dst ==
, allow
...
  ip6.src ==  && ip6.dst ==
, allow

so on large clusters (say 1000 nodes) with IPv6 and IPv4 enabled we can
reach ~2000 logical router policies. By having the SOUTH_TO_NORTH_IP we can
completely remove all of them and have the "default route" logical router
policy specify:

default route (lowest priority): ip4.src == 
ip4.dst == SOUTH_TO_NORTH_IP, nexthop = 
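As a concrete (purely hypothetical) illustration of the reduction, using
ovn-nbctl's lr-policy-add; the router name, priorities, subnets, and
nexthop below are all made up:

```shell
# Today: two allow policies per node, so ~2000 policies on a 1000-node
# dual-stack cluster (names/subnets are hypothetical):
ovn-nbctl lr-policy-add cluster-router 102 'ip4.src == 10.128.0.0/24 && ip4.dst == 10.128.0.0/14' allow
# ...repeated for every node, IPv4 and IPv6...

# With a notional SOUTH_TO_NORTH_IP alias, a single low-priority
# reroute policy would replace all of the per-node allow policies:
ovn-nbctl lr-policy-add cluster-router 100 'ip4.src == 10.128.0.0/24 && ip4.dst == SOUTH_TO_NORTH_IP' reroute 100.64.0.2
```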

In addition, if SOUTH_TO_NORTH_IP is a user defined IP,
>

I didn't think it should be user defined, but rather "system defined", like
0.0.0.0/0

 I am not sure how would it work,  because ip4.dst is the dst IP from
> packet header
>

I didn't intend for such an IP to be used solely as a destination IP, but
as a source too, if the user requires it.

Comparing it with SOUTH_TO_NORTH_IP would just result in mismatch, unless
> all south-to-north traffic really has this IP as destination (I guess
> that's not the case).
>

Sure, I just wanted to assess the feasibility of such an IP from OVN's
point of view. Obviously the real destination IP would be different, but
(without knowing the inner workings of OVN) I thought there might be a
programmable way of saying: "this IP is unknown to my network topology, so
I could use an identifier/alias grouping all such IPs under an umbrella
identifier such as X.X.X.X/X"


On Wed, Sep 16, 2020 at 11:09 PM Han Zhou  wrote:

>
>
> On Wed, Sep 16, 2020 at 10:07 AM Alexander Constantinescu <
> acons...@redhat.com> wrote:
>
>> In this example it is equivalent to just "ip4.src == 10.244.2.5/32"'.
>>>
>>
>> Yes, I was just using it as an example (though, granted, noop example)
>>
>> Some background to help steer the discussion:
>>
>> Essentially the functionality here is to have south -> north traffic from
>> certain logical switch ports exit the cluster through a dedicated node (an
>> egress node if you will). To do this we currently have a set of default
>> logical router policies, intended to leave east <-> west traffic untouched,
>> and then logical router policies with a lower priority, which specify
>> reroute actions for this functionality to happen. However, on large
>> clusters, there's this concern that the default logical router policies
>> will become too many. Hence why the idea here would be to drop them
>> completely and have this "special IP" that we can use to filter on the
>> destination, south -> north, traffic .
>>
>> If you have a default route, anything "unknown" would just hit the
>>> default route, right? Why would you need another IP for this purpose?
>>>
>>
>> As to remove the default logical router policies, which can become a
>> lot, on big clusters - as described above. With only reroute policies of
>> type: "ip4.src == 10.244.2.5/32 && ip4.dst == SOUTH_TO_NORTH_IP" things
>> would become lighter.
>>
>
> Thanks for the background. So you have:
> 
> 
> ...
> default route (lowest priority): ip4.src == ,
> nexthop = 
> default route (lowest priority): ip4.src == ,
> nexthop = 
>
> Is this the current situation?
> When you say there are too many default routes, what do you mean in the
> above example? How would the SOUTH_TO_NORTH_IP solve the problem?
>
> In addition, if SOUTH_TO_NORTH_IP is a user defined IP, I am not sure how
> would it work, because ip4.dst is the dst IP from packet header. Comparing
> it with SOUTH_TO_NORTH_IP would just result in mismatch, unless all
> south-to-north traffic really has this IP as destination (I guess that's
> not the case).
>
>
>>  In policies/ACL you will need to make sure the priorities are set
>>> properly to achieve the default-route behavior.
>>>
>>
>> Yes, so this is currently done, as described above.
>>
>> On Wed, Sep 16, 2020 at 6:35 PM Han Zhou  wrote:
>>
>>>
>>>
>>> On Wed, Sep 16, 2020 at 5:42 AM Alexander Constantinescu <
>>> acons...@redhat.com> wrote:
>>> >
>>> > Hi
>>> >
>>> > I was wondering if anybody is aware of an IP address signifying
>>> "external IP destinations"?
>>> >
>>> > Currently in OVN we can use the IP address 0.0.0.0/0 for match
>>> expressions in logical r

Re: [ovs-discuss] [OVN]: IP address representing external destinations

2020-09-16 Thread Alexander Constantinescu
>
> In this example it is equivalent to just "ip4.src == 10.244.2.5/32"'.
>

Yes, I was just using it as an example (though, granted, a no-op example)

Some background to help steer the discussion:

Essentially, the functionality here is to have south -> north traffic from
certain logical switch ports exit the cluster through a dedicated node (an
egress node, if you will). To do this we currently have a set of default
logical router policies, intended to leave east <-> west traffic untouched,
and then lower-priority logical router policies which specify reroute
actions to make this happen. However, on large clusters there's a concern
that the default logical router policies will become too many. Hence the
idea to drop them completely and have this "special IP" that we can use to
filter on the destination of south -> north traffic.

If you have a default route, anything "unknown" would just hit the default
> route, right? Why would you need another IP for this purpose?
>

To remove the default logical router policies, which can become numerous
on big clusters, as described above. With only reroute policies of the
type "ip4.src == 10.244.2.5/32 && ip4.dst == SOUTH_TO_NORTH_IP", things
would become lighter.

 In policies/ACL you will need to make sure the priorities are set properly
> to achieve the default-route behavior.
>

Yes, so this is currently done, as described above.

On Wed, Sep 16, 2020 at 6:35 PM Han Zhou  wrote:

>
>
> On Wed, Sep 16, 2020 at 5:42 AM Alexander Constantinescu <
> acons...@redhat.com> wrote:
> >
> > Hi
> >
> > I was wondering if anybody is aware of an IP address signifying
> "external IP destinations"?
> >
> > Currently in OVN we can use the IP address 0.0.0.0/0 for match
> expressions in logical routing policies / ACLs when we want to specify a
> source or destination IP equating to the pseudo term: "all IP
> addresses",ex: 'match="ip4.src == 10.244.2.5/32 && ip4.dst ==0.0.0.0/0"'
> >
> In this example it is equivalent to just "ip4.src == 10.244.2.5/32"'.
>
> > Essentially what I would need to do for an OVN-Kubernetes feature is
> specify such a match condition for south -> north traffic, i.e when the
> destination IP address is external to the cluster, and most likely
> "unknown" to OVN. Thus, when OVN does not know how to route it within the
> OVN network topology and has no choice except sending it out the default
> route.
> >
> > Do we have such an IP address in OVN/OVS? Would it be feasible to
> introduce, in case there is none?
> >
> We don't have such a special IP except 0.0.0.0/0. If you have a default
> route, anything "unknown" would just hit the default route, right? Why
> would you need another IP for this purpose? In logical_router_static_route
> the priority is based on prefix length. In policies/ACL you will need to
> make sure the priorities are set properly to achieve the default-route
> behavior.
>
> Thanks,
> Han
>
> > Thanks in advance!
> >
> > --
> >
> > Best regards,
> >
> >
> > Alexander Constantinescu
> >
> > Software Engineer, Openshift SDN
> >
> > Red Hat
> >
> > acons...@redhat.com
> >
> > ___
> > discuss mailing list
> > disc...@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>


-- 

Best regards,


Alexander Constantinescu

Software Engineer, Openshift SDN

Red Hat <https://www.redhat.com/>

acons...@redhat.com
<https://www.redhat.com/>


[ovs-discuss] [OVN]: IP address representing external destinations

2020-09-16 Thread Alexander Constantinescu
Hi

I was wondering if anybody is aware of an IP address signifying "external
IP destinations"?

Currently in OVN we can use the IP address 0.0.0.0/0 for match expressions
in logical routing policies / ACLs when we want to specify a source or
destination IP equating to the pseudo term "all IP addresses", e.g.:
'match="ip4.src == 10.244.2.5/32 && ip4.dst == 0.0.0.0/0"'

Essentially, what I would need to do for an OVN-Kubernetes feature is
specify such a match condition for south -> north traffic, i.e. when the
destination IP address is external to the cluster, and most likely
"unknown" to OVN. That is, traffic which OVN does not know how to route
within the OVN network topology and has no choice but to send out the
default route.

Do we have such an IP address in OVN/OVS? Would it be feasible to
introduce one, in case there is none?

Thanks in advance!

-- 

Best regards,


Alexander Constantinescu

Software Engineer, Openshift SDN

Red Hat <https://www.redhat.com/>

acons...@redhat.com
<https://www.redhat.com/>


[ovs-discuss] [OVN]: Routing external traffic through a specific gateway

2020-04-23 Thread Alexander Constantinescu
Hi all,

I am posting this here as I delved through the ovn-nb documentation for a
couple of days without any bright ideas popping up.

I am trying to have north -> south and south -> north traffic for a
logical_switch_port traverse a specific router gateway of my choosing. This
means: I don't want external traffic in either direction for this
logical_switch_port to pass through its pre-defined distributed gateway
router.

Some background: the logical_switch_port that I would like this to be done
for, is of type:"" (meaning it's connected to "A VM (or VIF) interface")
and is connected to a distributed gateway router local to the node the
logical_switch_port is hosted on. I have changed the type to "router" and
defined "options: router-port=...", and this is where I lose insight as to:

- How to configure the router-port correctly so that external traffic for
the logical_switch_port gets passed to it and properly routed afterwards
- Whether this is even the right approach
- Whether it is possible to do this only for north -> south / south ->
north traffic, and thus keep east <-> west traffic intact

Any ideas are welcome.

Thanks in advance!
/Alexander


[ovs-discuss] Questions: OVS-DB running raft mode

2020-01-29 Thread Alexander Constantinescu
Hi

My name is Alexander Constantinescu. I am working for Red Hat on
implementing OVN in a cloud environment (effectively working on
ovn-org/ovn-kubernetes) with an OVS database running in clustered mode.

We have a couple of questions concerning the expected behaviour of the DB
raft cluster w.r.t. the number of members during creation/runtime.

Background: the DB cluster will be deployed across X master nodes, where X
is user defined (i.e. the user can specify what X is). Raft consensus
requires at least (n/2)+1 nodes.

   - What is expected from the OVS DB if a user decides to create only 1
   master node, for example? We are noticing that it deploys just fine
   (without any indication of issues in the logs), but is it *really* fine?
   Is it even running in cluster mode? Can we expect transactions to the DB
   to work fine? According to the consensus formula above it would just mean
   that the cluster cannot lose any member, but I would like to confirm that
   my understanding is correct and aligned with the implementation.
   - What is the expected behaviour if a raft cluster is created with a
   number of master nodes satisfying the raft consensus requirement, but
   some nodes disappear during its lifecycle and this condition ceases to
   hold? We have noticed that the northbound/southbound database ports
   close; is this the correct deduction according to the clustered
   implementation?
Thanks in advance for any answer(s)!

Best regards,


Alexander Constantinescu

Software Engineer, Openshift SDN

Red Hat <https://www.redhat.com/>

acons...@redhat.com
<https://www.redhat.com/>