Standard EtherChannel between the A-client and the A core/PE switch, but just a single link at the B end.

An L3 connection between the A and B clients over QinQ. I know end-to-end EtherChannel works; it's this mismatch that is confusing me.

Sent from my iPhone

> On May 2, 2016, at 12:09 AM, Garrett Skjelstad <garr...@skjelstad.org> wrote:
>
> So you want to run some sort of link aggregation on top of a standard dot1q frame type?
>
>> On May 1, 2016 18:01, "Wes Smith" <fath...@live.com> wrote:
>> Hi,
>>
>> I have two sites connected by an L2 VLAN trunk.
>>
>> On the A end, the A-client switch has multiple gig ports connecting to the "A" core/PE.
>>
>> On the B end, the "B-client" switch connects by a single 10G to the "B" core.
>>
>> All equipment is Cisco 6800 or 6509 Sup720/Sup2T.
>>
>> The question is how to handle the multiple gig ports at the A end.
>>
>> Preferably, I want a simple single Layer 3 connection using some sort of EtherChannel between the A-client and the A-core, connecting to the single 10G at the B end, instead of using independent L3 links for each of the A-end GigE ports.
>>
>> A-client -- L3 EtherChannel -- A-core -- QinQ -- B-core -- 10G -- B-client
>>
>> Is this doable?
>>
>> I know how to make multiple SVI connections over this path. It's the LAG config at the A end that is stumping me.
>> _______________________________________________
>> cisco-nsp mailing list  cisco-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at http://puck.nether.net/pipermail/cisco-nsp/
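One way to picture the A-end config is that the LACP/EtherChannel negotiation happens only between A-client and A-core, so the single 10G at the B end never participates in it; the L3 adjacency then rides an SVI across the QinQ transport. A minimal IOS sketch for the A-client side follows — the interface names, channel-group number, VLAN 100, and the /30 addressing are all placeholders, and the A-core/B-core QinQ provisioning is not shown:

```
! A-client: bundle the gig links toward A-core as a dot1q trunk
interface range GigabitEthernet1/1 - 4
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 100
 channel-group 1 mode active   ! LACP, negotiated with A-core only

interface Port-channel1
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 100

! L3 lives on the SVI; the routed peer is B-client, reached over the
! QinQ path (placeholder addressing)
interface Vlan100
 ip address 10.0.0.1 255.255.255.252
```

On the B-client side the mirror of this would be the same SVI/VLAN on its single 10G trunk, with no channel-group at all; whether a fully routed `no switchport` port-channel works instead depends on the A-core handing the bundle into the QinQ service as L2, so the SVI form is the safer sketch here.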