Correct.

With the LT- method you'd have the same problem. You'd have to stitch every 
L2CKT to an LT-0/0/0.x and then from LT-0/0/0.y (its partner unit) into the 
L3VPN, so there's a lot of config there too. LT- interfaces are also 
bandwidth-constrained: they use PFE bandwidth to send the packet through the 
Trio "twice", so to speak. It's equivalent to using a hair-pin cable between 
two physical ports on the MX, so they can get congested. MPLS-LDP mesh groups 
don't pass through the PFE twice (watch Doug H or ytti correct me here hah).
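
For reference, the per-circuit stitching on the MX would look roughly like 
this -- a sketch only, with made-up interface names, unit numbers, VC IDs and 
addresses -- and you'd repeat the lt- unit pair for every circuit:

    chassis {
        fpc 0 {
            pic 0 {
                tunnel-services {
                    bandwidth 10g;          # carves PFE bandwidth for lt-/tunnel interfaces
                }
            }
        }
    }
    interfaces lt-0/0/0 {
        unit 100 {
            encapsulation vlan-ccc;         # L2 leg, terminates the L2CKT from the field ACX
            vlan-id 100;
            peer-unit 101;
        }
        unit 101 {
            encapsulation vlan;             # L3 leg, placed into the customer VRF
            vlan-id 100;
            peer-unit 100;
            family inet {
                address 192.0.2.1/30;
            }
        }
    }
    protocols {
        l2circuit {
            neighbor 10.255.0.1 {           # field ACX loopback (example)
                interface lt-0/0/0.100 {
                    virtual-circuit-id 100;
                }
            }
        }
    }
    routing-instances {
        CUST-A {
            instance-type vrf;
            interface lt-0/0/0.101;
            route-distinguisher 65000:100;
            vrf-target target:65000:100;
        }
    }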

If you want to keep the config simple on the MX, you're almost better off 
"book-ending" the L2circuit with another ACX sitting right next to your MX 
(i.e. do all the Martini stuff between the ACX-in-the-field and the 
ACX-at-the-head-end-POP), and hand all the L2 circuits off to the MX on a 
physical 10G port. That gives you a single place to provision the L3VPN 
interfaces on the MX, and you can use a per-unit scheduler to enforce 
southbound QoS at the MX. All the L2 stuff stays on the ACX, all the L3 stays 
on the MX.
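
The MX side of that handoff port would be something along these lines (again 
just a sketch; VLAN IDs, addresses, rates and the scheduler-map name are made 
up):

    interfaces xe-0/1/0 {
        description "Trunk from head-end ACX";
        vlan-tagging;
        per-unit-scheduler;                 # per-VLAN (per-customer) schedulers
        unit 200 {
            vlan-id 200;
            family inet {
                address 198.51.100.1/30;
            }
        }
    }
    class-of-service {
        interfaces {
            xe-0/1/0 {
                unit 200 {
                    shaping-rate 50m;       # southbound shaping for this customer
                    scheduler-map CUST-SMAP;
                }
            }
        }
    }
    routing-instances {
        CUST-B {
            instance-type vrf;
            interface xe-0/1/0.200;
            route-distinguisher 65000:200;
            vrf-target target:65000:200;
        }
    }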

Downside is you'll lose MX-termination redundancy and have an extra box at 
your head-end, which is one thing the ACX-to-L2-to-MX-VPLS solution solves 
(with the ACX you can terminate the same L2CKT on two different MX PEs using 
the backup-neighbor <alternate-PE-loopback> statement on the ACX at the pickup 
point "in the field").
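
On the field ACX that's just the standard pseudowire redundancy knob, 
something like this (loopbacks and IDs are placeholders):

    protocols {
        l2circuit {
            neighbor 10.255.1.1 {               # primary MX PE loopback
                interface ge-0/0/1.100 {
                    virtual-circuit-id 100;
                    backup-neighbor 10.255.1.2 {    # alternate MX PE loopback
                        standby;
                    }
                }
            }
        }
    }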

Effectively this is just Hierarchical VPLS I'm proposing. I've POC'ed this 
before and it worked well; but yes, 1 VPLS = 1 tail + 1 IRB into the VRF.
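
So per tail on the MX you'd end up with something like the below (sketch only; 
it assumes an LDP-signalled VPLS spoke from the field ACX, and all names, IDs 
and addresses are made up):

    routing-instances {
        CUST-C-VPLS {
            instance-type vpls;
            routing-interface irb.300;      # L3 hand-off into the VRF
            protocols {
                vpls {
                    no-tunnel-services;
                    vpls-id 300;
                    neighbor 10.255.0.1;    # spoke from the field ACX
                }
            }
        }
        CUST-C-VRF {
            instance-type vrf;
            interface irb.300;
            route-distinguisher 65000:300;
            vrf-target target:65000:300;
        }
    }
    interfaces {
        irb {
            unit 300 {
                family inet {
                    address 203.0.113.1/24;
                }
            }
        }
    }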

- CK.


On 20 Dec 2017, at 9:53 am, Pshem Kowalczyk <pshe...@gmail.com> wrote:

> Hi,
> 
> If I did it this way, wouldn't I need a separate VPLS for every access 
> circuit on the ACX? I'm trying to limit this to one 'infrastructure' L2 
> setup, with all of the provisioning on MX.
> 
> kind regards
> Pshem
