I'm pretty sure a tunnel PIC is only required on M/T-series platforms; on those 
boxes you needed one for VPLS to work at all. On MX-series the functionality is 
built into the line cards, but you do need to explicitly disable tunnel 
services with the no-tunnel-services statement under 
"routing-instances <routing-instance-name> protocols vpls":
http://www.juniper.net/techpubs/en_US/junos12.2/topics/reference/configuration-statement/no-tunnel-services-edit-protocols-vpls.html
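As a minimal sketch (the routing-instance name CUSTOMER-A is just a placeholder):

set routing-instances CUSTOMER-A instance-type vpls
set routing-instances CUSTOMER-A protocols vpls no-tunnel-services

With no-tunnel-services configured, the PE uses label-switched interface (LSI) 
logical interfaces for VPLS de-encapsulation instead of a vt- interface from a 
tunnel PIC.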

The command "set chassis fpc <XX> pic <XX> tunnel-services bandwidth <1g|10g>" 
lets you create lt- interfaces, which can be used to logically connect two 
logical systems or virtual routers together. Essentially, the MPC line cards in 
the MX have built-in tunnel PICs, and this command lets you use them.

https://www.juniper.net/techpubs/en_US/junos12.2/topics/example/logical-systems-connecting-lt.html
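A rough sketch of that (logical-system names and addressing are placeholders; 
on an MX80, enabling tunnel services on fpc 0 pic 0 typically creates lt-0/0/10):

set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
set logical-systems LS1 interfaces lt-0/0/10 unit 0 encapsulation ethernet peer-unit 1
set logical-systems LS1 interfaces lt-0/0/10 unit 0 family inet address 10.0.0.1/30
set logical-systems LS2 interfaces lt-0/0/10 unit 1 encapsulation ethernet peer-unit 0
set logical-systems LS2 interfaces lt-0/0/10 unit 1 family inet address 10.0.0.2/30

The two peer-unit statements cross-connect unit 0 and unit 1, giving the 
logical systems a back-to-back link.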

You shouldn't need this for VPLS. Hope this helps.

Serge

________________________________
 From: Mathias Sundman <math...@nilings.se>
To: Caillin Bathern <caill...@commtelns.com> 
Cc: juniper-nsp@puck.nether.net 
Sent: Friday, March 29, 2013 6:27:25 PM
Subject: Re: [j-nsp] MX5-T VPLS forwarding problem
 
On 03/29/2013 12:40 PM, Caillin Bathern wrote:
> First try adding "set chassis fpc <XX> pic <XX> tunnel-services
> bandwidth <1g|10G>".  You then have tunnel services on the MX80.  Can't
> remember if this has any caveats on being done to a live system though..

Can someone explain the benefits of using tunnel-services vs no-tunnel-services 
on the MX80 platform for VPLS services?

With a 10G backbone using 3-4 of the built-in 10G interfaces, and an expected 
VPLS use of 1-4G of bandwidth, what would be the recommended config?
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp