Re: [ovs-dev] [PATCH v2 ovn 0/3] Forwarding group to load balance l2 traffic with liveness detection

2020-01-16  Numan Siddique
On Sat, Jan 11, 2020 at 6:11 AM Manoj Sharma wrote:

> A forwarding group is an aggregation of logical switch ports of a
> logical switch, used to load balance traffic across those ports. It also
> detects liveness when the logical switch ports are realized as OVN tunnel
> ports in the physical topology.
>
> In the logical topology diagram below, the logical switch has two ports
> connected to chassis / external routers R1 and R2. The logical router needs
> to send traffic to an external network that is connected through R1 and R2.
>
> ++
>  +--+ R1 |*
> /   ++  ** **
>   +--++--+ / lsp1  * *
>   | Logical  ||   Logical|/   * External  *
>   | Router   ++   switch X*  Network  *
>   |  ||  |\   *   *
>   +--++--+ \ lsp2  * *
>  ^  \   ++  ** **
>  |   +--+ R2 |*
>  |  ++
>fwd_group -> (lsp1, lsp2)
>
> In the absence of a forwarding group, the logical router will have a
> unicast route pointing to either R1 or R2. If R1 or R2 goes down, the
> control plane has to intervene to update the route to point to the
> proper nexthop.
>
> With a forwarding group, a virtual IP (VIP) and a virtual MAC (VMAC)
> address are configured on the group. The logical router points to the
> forwarding group's VIP as the nexthop for hosts behind R1 and R2.
>
> [root@fwd-group]# ovn-nbctl fwd-group-add fwd ls1 VIP_1 VMAC_1 lsp1 lsp2
>
> [root@fwd-group]# ovn-nbctl fwd-group-list
> UUID      FWD_GROUP  VIP    VMAC    CHILD_PORTS
> UUID_1    fwd        VIP_1  VMAC_1  lsp1 lsp2
>
> [root@fwd-group]# ovn-nbctl lr-route-list lr1
> IPv4 Routes
> external_host_prefix/prefix_len        VIP_1 dst-ip
>
> The logical switch will install an ARP responder rule to reply with the
> VMAC as the MAC address for ARP requests for the VIP. It will also install
> a MAC lookup rule for the VMAC with an action that load balances across
> the logical switch ports of the forwarding group.
>
> Datapath: "ls1" Pipeline: ingress
> table=10(ls_in_arp_rsp  ), priority=50   , match=(arp.tpa == VIP_1 &&
> arp.op == 1), action=(eth.dst = eth.src; eth.src = VMAC_1; arp.op = 2;
> /* ARP reply */ arp.tha = arp.sha; arp.sha = VMAC_1; arp.tpa = arp.spa;
> arp.spa = VIP_1; outport = inport; flags.loopback = 1; output;)
>
> table=13(ls_in_l2_lkup  ), priority=50   , match=(eth.dst == VMAC_1),
> action=(fwd_group("lsp1","lsp2");)
>
> In the physical topology, OVN-managed hypervisors are connected to R1 and
> R2 through overlay tunnels. The logical flow's "fwd_group" action mentioned
> above gets translated to an OpenFlow group of type "select" with one bucket
> for each logical switch port.
>
> cookie=0x0, duration=16.869s, table=29, n_packets=4, n_bytes=392, idle_age=0,
> priority=111,metadata=0x9,dl_dst=VMAC_1 actions=group:1
>
> group_id=1,type=select,selection_method=dp_hash,
> bucket=actions=load:0x2->NXM_NX_REG15[0..15], resubmit(,32),
> bucket=actions=load:0x3->NXM_NX_REG15[0..15],resubmit(,32)
>
> where 0x2 and 0x3 are the port tunnel keys of lsp1 and lsp2.
>
> The OpenFlow group of type "select" with selection method "dp_hash" load
> balances traffic based on source and destination Ethernet address, VLAN ID,
> Ethernet type, IPv4/v6 source and destination address and protocol, and,
> for TCP and SCTP only, the source and destination ports.
>
> To detect path failures between the OVN-managed hypervisors and R1 / R2,
> BFD is enabled on the tunnel interfaces. The OpenFlow group is modified to
> include a watch_port for liveness detection of each port.
>
> group_id=1,type=select,selection_method=dp_hash,
>   bucket=watch_port:31,actions=load:0x2->NXM_NX_REG15[0..15],resubmit(,32),
>   bucket=watch_port:32,actions=load:0x3->NXM_NX_REG15[0..15],resubmit(,32)
>
> where 31 and 32 are the OVS port numbers of the tunnel interfaces
> connecting to R1 and R2.
>
> If the BFD forwarding status is down for any of the tunnels, the
> corresponding bucket will not be selected for packet forwarding.
>
>
Hi Manoj,

Thanks for the explanation in the previous message.

Regarding the commit message, I think I was not clear.

Can you please include the above description (with the ASCII diagram) in
the commit message of patch 2 of this series?

I think you can drop patch 3 and instead include the test cases added to
ovn-nbctl.at in patch 1, and the rest in patch 2.


I have some comments about the series which I will reply separately.

Thanks
Numan


> Manoj Sharma (3):
>   Schema changes for adding forwarding_group table and new ovn-nbctl commands

[ovs-dev] [PATCH v2 ovn 0/3] Forwarding group to load balance l2 traffic with liveness detection

2020-01-10  Manoj Sharma
A forwarding group is an aggregation of logical switch ports of a
logical switch, used to load balance traffic across those ports. It also
detects liveness when the logical switch ports are realized as OVN tunnel
ports in the physical topology.

In the logical topology diagram below, the logical switch has two ports
connected to chassis / external routers R1 and R2. The logical router needs
to send traffic to an external network that is connected through R1 and R2.

                                        +----+
                                 +------+ R1 +------+
                                /       +----+       \    ** **
  +----------+    +----------+ / lsp1                 \ *        *
  | Logical  |    | Logical  |/                        * External *
  | Router   +----+ switch X |                         *  Network *
  |          |    |          |\                        *         *
  +----------+    +----------+ \ lsp2                 /   ** **
       ^                        \       +----+       /
       |                         +------+ R2 +------+
       |                                +----+
     fwd_group -> (lsp1, lsp2)

In the absence of a forwarding group, the logical router will have a
unicast route pointing to either R1 or R2. If R1 or R2 goes down, the
control plane has to intervene to update the route to point to the
proper nexthop.

With a forwarding group, a virtual IP (VIP) and a virtual MAC (VMAC)
address are configured on the group. The logical router points to the
forwarding group's VIP as the nexthop for hosts behind R1 and R2.

[root@fwd-group]# ovn-nbctl fwd-group-add fwd ls1 VIP_1 VMAC_1 lsp1 lsp2

[root@fwd-group]# ovn-nbctl fwd-group-list
UUID      FWD_GROUP  VIP    VMAC    CHILD_PORTS
UUID_1    fwd        VIP_1  VMAC_1  lsp1 lsp2

[root@fwd-group]# ovn-nbctl lr-route-list lr1
IPv4 Routes
external_host_prefix/prefix_len        VIP_1 dst-ip
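
(For reference, a route like the one above can be created with the
existing lr-route-add command; "external_host_prefix/prefix_len" and
VIP_1 are the placeholder names from this example, not literal values.)

[root@fwd-group]# ovn-nbctl lr-route-add lr1 external_host_prefix/prefix_len VIP_1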

The logical switch will install an ARP responder rule to reply with the
VMAC as the MAC address for ARP requests for the VIP. It will also install
a MAC lookup rule for the VMAC with an action that load balances across
the logical switch ports of the forwarding group.

Datapath: "ls1" Pipeline: ingress
table=10(ls_in_arp_rsp  ), priority=50   , match=(arp.tpa == VIP_1 &&
arp.op == 1), action=(eth.dst = eth.src; eth.src = VMAC_1; arp.op = 2;
/* ARP reply */ arp.tha = arp.sha; arp.sha = VMAC_1; arp.tpa = arp.spa;
arp.spa = VIP_1; outport = inport; flags.loopback = 1; output;)

table=13(ls_in_l2_lkup  ), priority=50   , match=(eth.dst == VMAC_1),
action=(fwd_group("lsp1","lsp2");)

In the physical topology, OVN-managed hypervisors are connected to R1 and
R2 through overlay tunnels. The logical flow's "fwd_group" action mentioned
above gets translated to an OpenFlow group of type "select" with one bucket
for each logical switch port.

cookie=0x0, duration=16.869s, table=29, n_packets=4, n_bytes=392, idle_age=0,
priority=111,metadata=0x9,dl_dst=VMAC_1 actions=group:1

group_id=1,type=select,selection_method=dp_hash,
bucket=actions=load:0x2->NXM_NX_REG15[0..15], resubmit(,32),
bucket=actions=load:0x3->NXM_NX_REG15[0..15],resubmit(,32)

where 0x2 and 0x3 are the port tunnel keys of lsp1 and lsp2.
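
(A port's tunnel key can be read from the southbound Port_Binding table,
for example as below; lsp1 is the port name from this example.)

[root@fwd-group]# ovn-sbctl get Port_Binding lsp1 tunnel_key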

The OpenFlow group of type "select" with selection method "dp_hash" load
balances traffic based on source and destination Ethernet address, VLAN ID,
Ethernet type, IPv4/v6 source and destination address and protocol, and,
for TCP and SCTP only, the source and destination ports.
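
(The installed group and its per-bucket statistics can be inspected on
the hypervisor with OpenFlow 1.3 or later, assuming br-int is the
integration bridge.)

[root@fwd-group]# ovs-ofctl -O OpenFlow13 dump-groups br-int
[root@fwd-group]# ovs-ofctl -O OpenFlow13 dump-group-stats br-int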

To detect path failures between the OVN-managed hypervisors and R1 / R2,
BFD is enabled on the tunnel interfaces. The OpenFlow group is modified to
include a watch_port for liveness detection of each port.
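
(For illustration, BFD can be enabled by hand on an OVS tunnel interface
as below; "ovn-hv1-0" is a hypothetical tunnel port name.)

[root@fwd-group]# ovs-vsctl set interface ovn-hv1-0 bfd:enable=true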

group_id=1,type=select,selection_method=dp_hash,
  bucket=watch_port:31,actions=load:0x2->NXM_NX_REG15[0..15],resubmit(,32),
  bucket=watch_port:32,actions=load:0x3->NXM_NX_REG15[0..15],resubmit(,32)

where 31 and 32 are the OVS port numbers of the tunnel interfaces
connecting to R1 and R2.
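
(The OpenFlow port number of a tunnel interface can be looked up in its
ofport column, or with "ovs-ofctl show br-int"; the interface name below
is hypothetical.)

[root@fwd-group]# ovs-vsctl get Interface ovn-hv1-0 ofport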

If the BFD forwarding status is down for any of the tunnels, the
corresponding bucket will not be selected for packet forwarding.
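
(The BFD state of a tunnel, including the forwarding flag that the
watch_port check relies on, is visible in the interface's bfd_status
column; the interface name below is hypothetical.)

[root@fwd-group]# ovs-vsctl get Interface ovn-hv1-0 bfd_status:forwarding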

Manoj Sharma (3):
  Schema changes for adding forwarding_group table and new ovn-nbctl commands
  Changes to add flow in the logical switch as well as in the ovn controller
  Unit tests for forwarding group

 controller/lflow.c        |  20 ++
 controller/physical.c     |  13 ++
 controller/physical.h     |   4 +
 include/ovn/actions.h     |  19 ++-
 lib/actions.c             | 122 ++++++++
 northd/ovn-northd.c       |  63 ++++
 ovn-nb.ovsschema          |  18 ++-
 ovn-nb.xml                |  35 ++
 tests/ovn-nbctl.at        |  37 ++
 tests/ovn.at              | 124 ++++++++
 utilities/ovn-nbctl.8.xml |  37 ++
 utilities/ovn-nbctl.c     | 253 ++++++++++++++++