Saku,

Mark is correct: l2circuits are end-to-end services, where uptime is measured at 
the termination points.
Inside the backbone, every l2circuit rides an LSP with FRR, so from one 
point to another the LSP can find the best route, keep a second-best 
route in place (standby), and as many alternate routes as you want...
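On Junos, that per-LSP protection looks roughly like this; just a sketch, with hypothetical LSP/path names and an example loopback address:

```
protocols {
    mpls {
        label-switched-path to-PE2 {
            to 192.0.2.2;            # egress PE loopback (example)
            fast-reroute;            # local repair around a failed segment
            primary PATH-A;
            secondary PATH-B {
                standby;             # second-best route pre-signaled, ready to take over
            }
        }
        path PATH-A;                 # loose paths: CSPF searches the best route
        path PATH-B;
    }
}
```

More secondary paths can be added the same way, as many routes as you want.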

If we experience a degraded fiber/service, checking the LSP tells us where 
to look, and after disabling that segment of the network, the problem is solved.

Troubleshooting time decreased to just a few minutes, and that makes everyone happy.

VPLS is very good when you use one port per customer (1/10Gb). But when you have 
to set up 10Gb trunk ports and put lots of VLANs into a VC of 5 to 10 QFX 
switches, things go wild... one mistake, and an L2 loop will take down every 
customer inside that distribution POD.
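For reference, a BGP-signaled VPLS instance on an MX looks roughly like this (all identifiers here are made up); every interface placed in the instance shares one broadcast domain, which is why a single looped trunk hurts everyone in it:

```
routing-instances {
    CUST-VPLS {
        instance-type vpls;
        interface ge-0/0/2.100;          # trunk toward the QFX VC (example)
        route-distinguisher 65000:100;
        vrf-target target:65000:100;
        protocols {
            vpls {
                site POD-1 {
                    site-identifier 1;   # unique per site in the instance
                }
            }
        }
    }
}
```

(The BGP sessions also need the l2vpn signaling family enabled for this to come up.)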

Remember that layer 2 still lives inside the VPLS instance on the MX routers and 
drops down to the VC. You can loop easily; yes, here our ops team created an L2 
loop twice in one year, dropping more than 500 customers, with a big impact... 
After the second one, the order was to move to l2circuits. MAC learning and so 
on now resides at the CPE equipment, and the core stays clean.
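A Junos l2circuit, by contrast, is just a point-to-point cross-connect; a minimal sketch (neighbor address, interface, and VC ID are examples):

```
protocols {
    l2circuit {
        neighbor 192.0.2.2 {             # remote PE loopback (example)
            interface ge-0/0/1.100 {     # customer-facing attachment circuit
                virtual-circuit-id 100;  # must match on both PEs
            }
        }
    }
}
```

It is LDP-signaled, so LDP must also be running between the two PE loopbacks; no MAC table is kept in the core, and learning stays on the CPE at each end.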

I’m just sharing my operational experience and fears about the quality of the 
services offered... we provide last-mile connectivity for carriers like Verizon, 
Orange, Algar, British Telecom and so on, so quality of service and availability 
are the rule of the business. We also provide 10/40/100Gbps l2circuits for ISPs 
over here. So quality and uptime are the focus.

The ELS CLI brings me some problems: QinQ L2TP services don’t work, and neither 
does RTG. Because of that, I’m still using the EX2200 and EX3300.

About those Aristas, I have no word of anyone using them as a P router, or 
whether they offer xconnect/l2circuit services; not even the local Arista dealer 
knows... We have some Aristas doing only layer 2/3 for colocation services 
inside our datacenter facilities.



Regards,
Alexandre

On 7 Jul 2018, at 09:38, Mark Tinka 
<mark.ti...@seacom.mu<mailto:mark.ti...@seacom.mu>> wrote:



On 7/Jul/18 14:16, Saku Ytti wrote:


I feel your frustration, but to me it feels like there is very little
sharable knowledge in your experience.

Hmmh, I thought there was a fair bit, explicit and inferred. I feel Alexandre 
could have gone on more than his TL;DR post allowed for :-).



 You seem to compare full-blown
VPLS, with virtual switch and MAC learning to single LDP martini. You
also seem to blame BGP pseudowires for BGP transport flapping, clearly
you'd have equally bad time if LDP transport flaps, but at least in
BGP's case you can have redundancy via multiple RR connections, with
LDP you're reliant on the single session.

Sounded like BGP flaps were one of several problems Alexandre described, 
including an unruly BGP routing table, et al.

Not sure how relevant RR redundancy is per your argument, as ultimately, a 
single customer needing an end-to-end pw is mostly relying on the uptime of the 
PE devices at each end of their circuit, and liveliness of the core. If those 
pw's are linked by an LDP thread, what would a 2nd LDP-based pw (if that were 
sensibly possible) bring to the table?  I'm not dissing BGP-based pw signaling 
in any way or form, but for that, you'd need "Router + IGP + BGP + RR + RR" to 
be fine. With LDP-based signaling for just this one customer, you only need 
"Router + IGP + LDP" to be fine.

Personally, I've never deployed VPLS, nor had the appetite for it. It just 
seemed like a handful on paper the moment it was first published, not to 
mention the war stories around it from the brave souls that deployed it when 
VPLS was the buzzword then that SDN is these days. It certainly made the case 
for EVPN, which I still steer clear from until I find a requirement that can't 
be solved any other way.

Again, no dis to anyone running VPLS; just much respect to you for all your 
nerve :-).

Mark.
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
