On Mon, Oct 3, 2016 at 3:16 PM, Han Zhou <zhou...@gmail.com> wrote:

>
>
> On Mon, Oct 3, 2016 at 2:21 PM, Darrell Ball <dlu...@gmail.com> wrote:
> >
> >
> >
> > On Mon, Oct 3, 2016 at 10:54 AM, Han Zhou <zhou...@gmail.com> wrote:
> >>
> >>
> >>
> >> On Sun, Oct 2, 2016 at 2:14 PM, Darrell Ball <dlu...@gmail.com> wrote:
> >> >
> >> >
> >> >
> >> > On Sun, Oct 2, 2016 at 11:27 AM, Han Zhou <zhou...@gmail.com> wrote:
> >> >>
> >> >> On Sat, Oct 1, 2016 at 4:34 PM, Darrell Ball <dlu...@gmail.com>
> wrote:
> >> >> >
> >> >> > Do not install any potential logical switch "router type"
> >> >> > port ARP responders.  Logical router port ARP responders
> >> >> > should be sufficient in this respect.
> >> >> > It seems a little weird for a logical switch not proxying
> >> >> > for a remote VIF to be responding to ARP requests, and we
> >> >> > are not functionally using this capability in OVN.
> >> >> >
> >> >> Hi Darrell,
> >> >>
> >> >> The ARP responder for the patch port is useful, e.g., when a VM pings
> the default gateway IP. Would removing the flow cause the ARP request to get
> flooded? And what's the benefit of removing it here?
> >> >
> >> >
> >> > 1) Modeling: I would expect the L3 gateway ARP responder to be
> associated with the L3
> >> > gateway router datapath, at the very least. That way, the modeling is
> correct and we don't have a situation where, for example, a phantom gateway
> router is never even downloaded to an HV,
> >> > but is "responding", or rather appearing to respond, to ARP requests.
> >> >
> >>
> >> Ok, I see your concern. To achieve this expectation, it may be done in
> a way similar to the regular LS ports: reply to ARP only if
> Logical_Switch_Port.up = true. When the gateway router is bound to a chassis
> we can set the LS patch port up to true. And for distributed routers we can
> set the patch port up directly. This way we can avoid responding to ARP
> before the gateway router is bound.
> >
> >
> > I think you missed the main aspect.
> > There is a layering violation in doing this, and also a modeling issue.
> > The key idea can be summarized as "a logical router should respond to
> ARPs
> > to itself" rather than some logical switch proxying that.
> > This has implications for cases where an IP address is shared by several
> gateways
> > and then the binding is used to designate the gateway in use.
> >
>
> So is it for L3 GW HA? Are different MACs supposed to be used when the
> router is hosted on different gateways?
>

One approach is for backup gateways to take over IP ownership, but we are
probably getting ahead of ourselves.

I think we can speed up the debate, given the present state of the code.

For the L3 gateway, there is an explicit MAC binding entry generated
in northd.
This is NOT the ARP responder we are discussing in this patch;
it is an explicit binding rule.

See the L3 gateway test.

 table=6 (lr_in_arp_resolve  ), priority=100  , match=(outport == "R1_join"
&& reg0 == 20.0.0.2), action=(eth.dst = 00:00:04:01:02:04; next;)
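
For reference, a rough way to find that rule in the southbound logical flow
table (a sketch, assuming ovn-sbctl can reach the southbound DB; the
lflow-list output format may vary between versions):

# List all logical flows and filter for the router ARP-resolve stage
ovn-sbctl lflow-list | grep lr_in_arp_resolve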


From ofproto tracing on HV1:
Rule: table=22 cookie=0 priority=100,reg0=0x14000002,reg15=0x2,metadata=0x1
OpenFlow actions=set_field:00:00:04:01:02:04->eth_dst,resubmit(,23)
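
The same rule can be confirmed directly in the OpenFlow tables on HV1; a
sketch, assuming the default integration bridge name br-int (0x14000002 is
just 20.0.0.2 in hex):

# Show the physical flow that resolves 20.0.0.2 to R2's MAC
ovs-ofctl dump-flows br-int table=22 | grep 0x14000002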

The gets "helped along" by mac address sharing by transit logical switch
and gateway
patch ports.

# Connect R2 to join
ovn-nbctl lrp-add R2 R2_join 00:00:04:01:02:04 20.0.0.2/24
ovn-nbctl lsp-add join r2-join -- set Logical_Switch_Port r2-join \
    type=router options:router-port=R2_join addresses='"00:00:04:01:02:04"'
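
For completeness, R1's side of "join" is configured the same way; the MAC
and IP below are only assumed values for illustration:

# Connect R1 to join (illustrative values)
ovn-nbctl lrp-add R1 R1_join 00:00:04:01:02:03 20.0.0.1/24
ovn-nbctl lsp-add join r1-join -- set Logical_Switch_Port r1-join \
    type=router options:router-port=R1_join addresses='"00:00:04:01:02:03"'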


Let me know if you still think the logical switch "router type" ARP
responder (removed in this patch) is needed for L3 gateway support.




>
>
> >> >
> >> > 3) Usually, there are a limited number of L3 gateways, and therefore a
> limited number of associated bindings.
> >> > Also, for VMs participating in south<->north traffic, the bindings
> are less likely
> >> > to time out since there are multiple uses of the L3 gateway for each
> VM.
> >> >
> >>
> >> With a big L2, even a small percentage of VMs doing ARP will cause
> annoying flooding. Moreover, considering that containers come and go
> frequently, this would be more common. So I think it is still better to
> suppress ARP for south-north traffic if there is no real problem with it.
> >
> >
> > I don't buy it.
> >
> > Today, we skip using ARP responders for packets arriving on localnet and
> vtep ports,
> > meaning the ARP requests go to all VMs.
> > This would be a much more serious issue since external abuse is possible.
> >
> > This L3 gateway case is more limited, and other approaches are possible
> to mitigate
> > this.
> > We discussed this internally, and we are otherwise thinking of having a
> user-visible
> > configuration for ARP responders in general.
> >
> > If we really cannot tolerate a few containers coming and going, then we
> have a serious
> > problem that already exists for the localnet and vtep cases as well as
> pure L2 forwarding
> > decisions.
> >
>
> For localnet, it is supposed to be used for the physical network only, where
> the L2 size is usually not very big. For vtep, it is a problem, and it is a
> disadvantage of using vtep. I think it would be good to avoid that same
> problem for ARPs from VIFs if possible.
>
>