On Saturday, February 6, 2016, William McLendon <wimcl...@gmail.com> wrote:
> In this case I think they may have over-engineered the process, or there
> are cases of concern I'm not aware of as to why they did it this way. I
> have never configured Cisco vPC, and I understand it is fairly
> complicated too, but Juniper's MC-LAG config requirements seem way too
> complicated.
>
> If you are using MC-LAG where the MC-LAG peers are running any routing
> protocol over that VLAN, or L2-only with an IRB interface for management,
> then VRRP must be configured for proper ARP synchronization. One would
> think it would just forward the ARP packet over the ICL between the
> peers, but that is not what happens. We got bit by this twice: once
> running OSPF over an MC-LAG (the MC-LAG also carried some L2 VLANs...
> don't ask), and again when we had an MC-LAG pair of EX9200s connected to
> an MC-LAG pair of QFX5100s. The QFX5100s were L2-only, so they only had
> an IRB for management, and we could not reach one of the members
> consistently. The workaround was to clear ARP, which seemed to cause
> them to re-sync/relearn, but once the ARP timer expired on one of the
> QFXs, it would never relearn the ARP entry for the VRRP gateway address
> until ARP was cleared again. Once we configured VRRP, everything worked
> correctly. Not the end of the world, but annoying to be sure.
>
> Since Juniper's IPv6 documentation and config examples for MC-LAG are
> basically non-existent, it's still unclear to me what the official
> support is, but the config I posted previously, where you define a
> VRRPv6 group with static NDP entries for both the link-local and global
> unicast addresses of the peer, seemed to work well in the bit of testing
> I was able to do in our lab, basically mimicking all of the IPv4
> requirements. I think the QFX version was 14.1X53-D27, or something
> similar in that 14.1X53 family.

Thanks Will. We'll try this. The one issue I see is that the NS is
sourced from the VRRP link-local address (and virtual MAC), not the IRB
link-local.
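For anyone following along, my reading of the config Will describes is
roughly the sketch below. All addresses, MACs, and group numbers are
placeholders I made up, and the MAC in the ndp entries would be the
peer's actual IRB MAC, so treat this as an untested sketch rather than a
verified config:

    interfaces {
        irb {
            unit 100 {
                family inet6 {
                    address 2001:db8:0:1::2/64 {
                        vrrp-inet6-group 1 {
                            virtual-inet6-address 2001:db8:0:1::1;
                            virtual-link-local-address fe80::1;
                            accept-data;
                        }
                        /* static NDP entry for the peer's global address */
                        ndp 2001:db8:0:1::3 mac 00:11:22:33:44:55;
                    }
                    address fe80::2/64 {
                        /* static NDP entry for the peer's link-local address */
                        ndp fe80::3 mac 00:11:22:33:44:55;
                    }
                }
            }
        }
    }

i.e. VRRPv6 on the IRB plus a static ndp entry per peer address, which
is what "mimicking all of the IPv4 requirements" (VRRP + static ARP)
would look like on the v6 side.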
...karl

> Thanks,
>
> Will
>
> > On Feb 5, 2016, at 12:00 PM, juniper-nsp-requ...@puck.nether.net wrote:
> >
> > Message: 4
> > Date: Fri, 5 Feb 2016 16:42:56 +0000
> > From: Phil Mayers <p.may...@imperial.ac.uk>
> > To: juniper-nsp@puck.nether.net
> > Subject: Re: [j-nsp] QFX mc-lag and v6 ND
> > Message-ID: <56b4d110.7060...@imperial.ac.uk>
> > Content-Type: text/plain; charset=utf-8; format=flowed
> >
> > On 05/02/16 14:40, Adam Vitkovsky wrote:
> >
> >> That's the only occasion on the internet where NDP and MC-LAG are
> >> listed in the same sentence, which is not a good sign on its own. But
> >> there is no explanation of how it is done, especially the part about
> >> how the ND cache is maintained between the LAG members, which clearly
> >> is what is not happening in your case.
> >
> > I must be missing something - why would a LAG of any type do any
> > special processing of ND (or ARP, for that matter) traffic? All it has
> > to do is forward the reply appropriately, e.g. across the MC-LAG
> > control link if the dest MAC is the peer switch or multi/broadcast.
> >
> > Obviously if you're doing some sort of active-active L3 forwarding on
> > top of the MC-LAG then special things need to happen - but did the OP
> > say that?
> >
> > Or is there some subtlety (dumb-lety?) about the way Juniper do this?
> >
> > ------------------------------

_______________________________________________
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp