Re: [j-nsp] MX5-T VPLS forwarding problem

2013-03-31 Thread Serge Vautour
I'm pretty sure tunnel-services is only required on M/T platforms. On those 
boxes you needed a tunnel PIC to allow VPLS to work. On the MX series the 
functionality is built into the line cards. You do need to explicitly disable 
it with the no-tunnel-services statement under 
[edit routing-instances <instance-name> protocols vpls]:
http://www.juniper.net/techpubs/en_US/junos12.2/topics/reference/configuration-statement/no-tunnel-services-edit-protocols-vpls.html
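
For reference, a minimal sketch of a BGP-VPLS instance with 
no-tunnel-services (the instance name, interface and community values 
are made up):

  set routing-instances VPLS-CUST1 instance-type vpls
  set routing-instances VPLS-CUST1 interface ge-1/1/0.0
  set routing-instances VPLS-CUST1 route-distinguisher 65000:100
  set routing-instances VPLS-CUST1 vrf-target target:65000:100
  set routing-instances VPLS-CUST1 protocols vpls site-range 10
  set routing-instances VPLS-CUST1 protocols vpls site CE1 site-identifier 1
  set routing-instances VPLS-CUST1 protocols vpls no-tunnel-services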

The command set chassis fpc XX pic XX tunnel-services bandwidth 1g|10g 
lets you create lt- interfaces that can be used to logically connect two 
Logical Systems or Virtual Routers together. Essentially, the MPC line 
cards in the MX have built-in tunnel PICs; this command lets you use 
them. A rough sketch follows the link below.

https://www.juniper.net/techpubs/en_US/junos12.2/topics/example/logical-systems-connecting-lt.html
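
Roughly, that looks like this (slot/PIC numbers, logical-system names 
and addresses are made up; on the MX80 the resulting interface typically 
shows up as lt-0/0/10, but check your platform):

  set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
  set logical-systems LS-A interfaces lt-0/0/10 unit 0 encapsulation ethernet
  set logical-systems LS-A interfaces lt-0/0/10 unit 0 peer-unit 1
  set logical-systems LS-A interfaces lt-0/0/10 unit 0 family inet address 10.0.0.1/30
  set logical-systems LS-B interfaces lt-0/0/10 unit 1 encapsulation ethernet
  set logical-systems LS-B interfaces lt-0/0/10 unit 1 peer-unit 0
  set logical-systems LS-B interfaces lt-0/0/10 unit 1 family inet address 10.0.0.2/30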

You shouldn't need this for VPLS. Hope this helps.

Serge

 From: Mathias Sundman math...@nilings.se
To: Caillin Bathern caill...@commtelns.com 
Cc: juniper-nsp@puck.nether.net 
Sent: Friday, March 29, 2013 6:27:25 PM
Subject: Re: [j-nsp] MX5-T VPLS forwarding problem
 
On 03/29/2013 12:40 PM, Caillin Bathern wrote:
 First try adding set chassis fpc XX pic XX tunnel-services
 bandwidth 1g|10g.  You then have tunnel services on the MX80.  Can't
 remember if this has any caveats on being done to a live system though...

Can someone explain the benefits of using tunnel-services vs no-tunnel-services 
on the MX80 platform for VPLS services?

With a 10G backbone using 3-4 of the built-in 10G interfaces, and an expected 
VPLS use of 1-4G of bandwidth, what would be the recommended config?
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP PIC Edge on MX platforms

2013-03-31 Thread Mark Tinka
On Tuesday, January 01, 2013 02:58:54 PM Phil Bedard wrote:

 I agree it is not quite the same. The hierarchical FIB
 and next-hop indirection so the number of prefixes is
 independent of the reconvergence time has been in Junos
 for a long time. That is like the PIC part.

http://www.juniper.net/techpubs/software/junos/junos94/swconfig-routing/enabling-an-indirect-next-hop.html

Yes, the PIC part has been in Junos for a while now. 

It was buggy in Junos 9, but I've had no major dramas even 
in the worst instances of Junos 10 and later.
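
If anyone needs the knob from that link, it's a one-liner (a sketch; on 
newer Trio-based MX hardware I believe indirect next-hops are enabled 
by default, so check your platform):

  set routing-options forwarding-table indirect-next-hop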

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Juniper equivalent to Cisco 3800X

2013-03-31 Thread Mark Tinka
On Wednesday, January 02, 2013 04:52:02 PM Eric Van Tol wrote:

 Unfortunately, I do not.  There may be other differences
 between the two which is why they were pushing the
 ME3800X, such as buffer space or other QoS related
 differences.  Perhaps it was a positioning issue, but if
 that's the case, why not the ME3600X?  Given just the
 birds eye view of what you need, it sounds like the
 ME3600X, ME3800X, EX3200, and EX4200 would all work and
 it really depends on the more specific features and
 functions you would need, as well as your budget.

You can't really compare the ME3600X/3800X to the Juniper 
EX3200/4200 switches.

The Cisco options are more routers than switches - even more so 
when compared to other Cisco switches, including the older-generation 
ME switches and the newer-generation desktop switches from Cisco.

What this means is:

- You'll get better IPv6 support.
- You'll get better filtering support.
- You'll get router-like QoS support.
- You'll get EVC support.
- You'll get full MPLS support (including TE).
- You'll get MEF-type support for Ethernet.
- You won't lose typical Layer 2 features.

The main difference between the ME3600X and ME3800X is 
scale. 

The ME3800X is positioned as an aggregation device, while 
the ME3600X is positioned as an access device. All this means is 
that the number of ACL, IPv4, IPv6, multicast, MAC address, bridge 
domain, etc., entries will be slightly higher on the 
ME3800X than on the ME3600X. Apart from that, all other 
features are the same.

I feel the ME3800X would be useful as an aggregation device 
if you're looking at high-volume, low-cost aggregation 
solutions for situations like LTE (where the MX and ASR9000 
are great, but just too big/costly). Otherwise, for the 
majority of cases, the ME3600X is fine with me: for me, the 
platform is the ideal solution for MPLS in the access, and 
for getting rid of STP and all that nonsense once and for 
all when building large-scale Metro-E access networks.

Hope this helps.

Cheers,

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Question about Routing Engine Redundancy on MX

2013-03-31 Thread Mark Tinka
On Wednesday, January 09, 2013 11:23:44 PM Richard A Steenbergen wrote:

 Plus, I don't think I've actually had NSR work correctly
 in about 4-5 years now. There are hard-coded time-outs
 during the NSR sync process after the backup RE reboots,
 and if your network is big enough that you carry some
 decent number of BGP paths it will take so long to sync
 that this will time out and fail the entire process. I
 once had a case open about this issue, but after about
 1.5 years of being unable to explain it to the idiot in
 JTAC I just gave up. I checked several years later, and
 it was still broken in exactly the same way, so I'm
 going to guess that no other large network dares to run
 NSR either. :)

The concept is great, but I've never quite bought into it 
after all these years, complaints on fora about it not 
working notwithstanding.

Same with ISSU - in most cases, I'm probably going to run 
upgrades in a maintenance window anyway, so ISSU really buys 
me nothing but pain (not to mention how long it takes to 
upgrade certain platforms, e.g., those based on IOS XR).

The only thing I've really deployed which has helped (even 
on single control plane systems) is IETF Graceful Restart. 
That's about as kinky as I'm willing to get.

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] EX Switch Question

2013-03-31 Thread Mark Tinka
On Friday, January 11, 2013 01:41:47 PM Pavel Lunin wrote:

 Looks like Juniper just didn't care much about metro Ethernet.
 BTW, it's sometimes said that the reason is a lack of
 such a market in North America (where I've never even
 been, so I can't judge whether that's correct :).
 Another reason (more serious, I would say) is Juniper's
 strong position in the MPLS battle, which doesn't allow
 them to invest much in plain-Ethernet metro access
 technologies, so as not to kill the goose that lays the
 golden eggs (from a pure engineering point of view there
 is also some logic in such an approach).

Well, Cisco must be seeing some market that Juniper aren't, 
considering these are both American companies.

Needless to say, Cisco have healthy support for both MPLS-
based and MEF-/Carrier Ethernet-based Metro-E solutions in 
their latest platforms targeted at this space.

 With the launch of ACX series things might change and a
 reasonably cheap JUNOS-based metro ethernet-over-MPLS
 solution might become available, but this is a deal of a
 few quarters ahead, as I understand.

When the MX80 was still codenamed Taz, and all we got to 
test was a board with no chassis, I asked Juniper to rethink 
their idea about a 1U solution for access in the Metro, 
especially because Brocade had already pushed out the 
CER/CES2000, and Cisco were in the final stages of 
commissioning the ME3600X/3800X, at the time.

Epic fail on Juniper's part to think that networks would 
still go for oversized boxes in small-box deployments. 
The ERBU head promised that they were looking at a 1U MX80 
box that would rival the Cisco and Brocade options in the 
access, but I think they decided the MX5, MX10 and MX40 
were far better ideas :-\.

I think Juniper will get with the program, but the damage 
will have long been done (especially since, as we have now come 
to expect, you can't just roll out current code on new boxes - 
it takes a while to break them in even with basic parity).

It's for the same reason I can't fathom why they still have 
no answer to Cisco's ASR1000. Even just for route 
reflection, I'd be very hard-pressed to choose a US$1 MX480 
with a 16GB RE over a Cisco ASR1001 costing ten thousand 
times the price.

C'mon, Juniper.

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX5-T VPLS forwarding problem

2013-03-31 Thread Sergey Khalavchuk
Hello,

On Fri, Mar 29, 2013 at 11:27 PM, Mathias Sundman math...@nilings.se wrote:
 On 03/29/2013 12:40 PM, Caillin Bathern wrote:

 Can someone explain the benefits of using tunnel-services vs
 no-tunnel-services on the MX80 platform for VPLS services?

tunnel-services is just a hack that allows two lookups per frame
received from the MPLS backbone, using frame recirculation:
1) a label lookup to map the frame to a VPLS instance,
2) a MAC lookup to forward the frame out the correct interface.

Recirculation means that every frame must be processed TWICE, and it
is possible that a frame will cross the fabric twice.
For this reason, when you enable tunnel-services for an FPC, you must
set a bandwidth limit (1g or 10g), and recirculated traffic will be policed.

On older FPCs it is mandatory for VPLS.
On newer FPCs, it is possible to perform the first lookup with the I/O
manager on the ingress line card and avoid recirculation.

So, with vrf-table-label or with no-tunnel-services, there is a single
lookup: no performance penalty, only profit.

Why might tunnel-services still be needed?
1) older FPCs (not MX).
2) SDH/ATM/local tunnels on the ingress line card don't allow the I/O
manager to perform the first lookup, so tunnel services are required.

 With a 10G backbone using 3-4 of the built-in 10G interfaces, and an
 expected VPLS use of 1-4G of bandwidth, what would be the recommended
 config?

Use vrf-table-label or no-tunnel-services.
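
In set form, the two single-lookup options look roughly like this
(instance names are made up; vrf-table-label applies to L3VPN VRFs,
no-tunnel-services to VPLS):

  set routing-instances VPN-A instance-type vrf
  set routing-instances VPN-A vrf-table-label
  set routing-instances VPLS-A instance-type vpls
  set routing-instances VPLS-A protocols vpls no-tunnel-services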

--
wbr,
Sergey Khalavchuk
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Confusion about DSCP marking and rewrite rules

2013-03-31 Thread Mark Tinka
On Monday, January 14, 2013 10:55:11 AM Per Granath wrote:

 On egress you may apply a rewrite rule to a class (on an
 interface). Essentially, this means you cannot rewrite
 on ingress.

On IQ2 and IQ2E PICs, you can remark on ingress with the 
ToS Translation Tables feature.

On the MX running Trio line cards, you can remark on ingress 
with an ingress firewall filter, but sadly, this doesn't 
support EXP or IPv6 DSCP bits.
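
A sketch of the filter approach (filter and interface names are made 
up; per the above, EXP and IPv6 DSCP can't be rewritten this way):

  set firewall family inet filter REMARK-IN term all then dscp af31
  set firewall family inet filter REMARK-IN term all then accept
  set interfaces ge-1/0/0 unit 0 family inet filter input REMARK-IN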

I like the ToS Translation Tables, but they're limited to 
platforms that run those line cards (which is not to say 
that an IQ2 or IQ2E PIC in an MX FPC will work - I've never 
tried).

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Redundancy with MX

2013-03-31 Thread Mark Tinka
On Friday, January 25, 2013 09:46:28 AM Saku Ytti wrote:

 Still I'd be afraid that the added complexity bites me
 more often than saves me.

I have to agree - I'd rather buy another chassis to solve my 
problems than implement logical systems (SDR in the Cisco 
world).

I'm also still waiting for some justification as to why this 
would be useful beyond the obvious (I know many folk that 
like this type of thing). Looking at all the restrictions 
involved, what may or may not work, what may or may not 
break, for me, it's like ISSU - nice on paper but not very 
practical.

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Redundancy with MX

2013-03-31 Thread Mark Tinka
On Friday, February 01, 2013 10:28:13 PM Eugeniu Patrascu wrote:

 I would go with the two MX80s and two L2 switches to
 aggregate all connections.
 
 I did a design like this with 2 x MX80 and 2 x EX4500 in
 a stack (only L2 aggregation, routing done on the
 MX). The switches would be connected to the MX80s by 10G
 ports (2 for IN, 2 for OUT - in each MX80), connected
 in an MC-LAG to the EX4500 stack. Redundancy all the way
 :)
 Yes, you would have to play with the routing protocols to
 balance traffic at some point if you saturate one of
 your links in the MX, but that would only happen if you
 want to do more than 20G one way.

The above is a reasonable design.

We structure ours based on small, medium or large edge 
sites, where it doesn't yet make sense to launch a full PoP 
with core routers, route reflectors, etc.

This would typically be a new PoP where we're going in on 
the back of a handful of customers, but not big enough yet 
to use as a transit PoP for major traffic flow in the global 
scheme of things.

My main issue with the MX80 is low 10Gbps density, if the 
box is a collapsed P/PE device, handling both customers as 
well as core traffic. In such cases, provided there is 
space, the MX480 has been considered from the Juniper side. 
Cisco ends up providing more flexibility with their ASR1006 
platform, which can then later be scaled to an ASR9000 for a 
large edge PoP, pre-full PoP design.

It really depends on:

- How small or large you expect your PoP to be.
- How much you plan to make in sales.
- How many core links come in.
- How fast those core links will be.

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] EX Switch Question

2013-03-31 Thread Mark Tinka
On Sunday, March 31, 2013 05:28:02 PM Jeff Wheeler wrote:

 Just problem #1 clearly demonstrates that
 upper-management has no idea what they are doing.  They
 are managing their inventory like they're making
 automobiles with razor-thin margins, not I.T. products
 which sell for many multiples of the manufacturing and
 logistics cost.  Not only that, but they have no
 systemic way for customers to expedite orders (and
 generate revenue / additional margin from such
 expedites), which is just leaving money on the table.

I suffered this exact issue last December.

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Junos 12.3 Release Date

2013-03-31 Thread Mark Tinka
On Wednesday, January 23, 2013 01:41:44 AM Caillin Bathern wrote:

 The X44-D10 release is Junos 12.2 for SRX/J Series
 platforms, e.g. flow-mode code.  Flow mode is branching
 away from real Junos as 12.1X44-D10, D15, D20, etc.,
 until 13.2 or 13.3 when they will consolidate again so
 that development of flow-mode code can be sped up.

When Juniper killed Junos on the J-series by making it 
JUNOS-ES (9.4, if my brain cells are still intact), they 
lost me on that side of the wall.

That would have been a decent route reflector if it had been 
developed as such. Juniper need to stop banking on customers 
buying M120s or MX960s just for route reflection. 
Obviously, it would help if service providers stopped doing 
that also - believe me, I know a few.

Mark.


signature.asc
Description: This is a digitally signed message part.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] EX Switch Question

2013-03-31 Thread Jeff Wheeler
On Sun, Mar 31, 2013 at 10:18 AM, Mark Tinka mark.ti...@seacom.mu wrote:
 no answer to Cisco's ASR1000. Even just for route
 reflection, I'd be very hard-pressed to choose a US$1 MX480
 with a 16GB RE over a Cisco ASR1001 costing ten thousand
 times the price.

These are all symptoms of Juniper's incompetent management.

* inability to manage supply chain & logistics, leading to 6-month
lead times on incredibly high-margin products
* essentially no software Q/A
* consistently failing to deliver on software commitments / roadmap
* big gaps in the product line that could be fixed easily, but aren't
* far worse TAC than competitors
* refusal to admit basic, obvious, and huge bugs exist so they can be fixed
* widespread ineptitude in the sales and sales-support force
* misplaced R&D efforts
* basic features that customers require missing from products *by
choice* while competitors have those features

Just problem #1 clearly demonstrates that upper-management has no idea
what they are doing.  They are managing their inventory like they're
making automobiles with razor-thin margins, not I.T. products which
sell for many multiples of the manufacturing and logistics cost.  Not
only that, but they have no systemic way for customers to expedite
orders (and generate revenue / additional margin from such expedites),
which is just leaving money on the table.

They cannot even solve the basic, easily-fixed problems with their
business.  I have little faith in Juniper's long-term future.

--
Jeff S Wheeler j...@inconcepts.biz
Sr Network Operator  /  Innovative Network Concepts
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] 1000Base-T SFP shows Link Up without cable inserted

2013-03-31 Thread Mathias Sundman
I've just upgraded two of my MX5-T boxes to 11.4R7.5, and after that my 
3rd-party 1000Base-T SFPs (Transmode originals based on Finisar) started 
to show Link Up as soon as the SFP is inserted (no cable connected).


On 11.2R5.4 it worked as it should. 11.4R1.14, which the boxes came 
with, was also broken.


Bug or feature? Has anyone used any Juniper original 1000Base-T SFPs on 
the MX-5/10/40/80-T platform with JunOS 11.4?


masun@solir1> show version
Hostname: solir1
Model: mx5-t
JUNOS Base OS boot [11.4R7.5]
JUNOS Base OS Software Suite [11.4R7.5]
JUNOS Kernel Software Suite [11.4R7.5]
JUNOS Crypto Software Suite [11.4R7.5]
JUNOS Packet Forwarding Engine Support (MX80) [11.4R7.5]
JUNOS Online Documentation [11.4R7.5]
JUNOS Routing Software Suite [11.4R7.5]

masun@solir1> show chassis pic fpc-slot 1 pic-slot 1
FPC slot 1, PIC slot 1 information:
  Type 10x 1GE(LAN) SFP
  StateOnline
  PIC version 2.26
  Uptime 2 days, 23 hours, 7 minutes, 20 seconds

PIC port information:
  FiberXcvr vendor
  Port  Cable typetype  Xcvr vendorpart number   Wavelength
  0 GIGE 1000Tn/a   FINISAR CORP.  FCLF8521P2BTL-TR  n/a

masun@solir1> show interfaces ge-1/1/0
Physical interface: ge-1/1/0, Enabled, Physical link is Up
  Interface index: 158, SNMP ifIndex: 526
  Description: Link sthir2 for Logical Systems (VPLS Lab)
  Link-level type: Ethernet, MTU: 2000, Speed: 1000mbps, BPDU Error: None,
  MAC-REWRITE Error: None, Loopback: Disabled, Source filtering: Disabled,
  Flow control: Enabled, Auto-negotiation: Enabled, Remote fault: Online
  Device flags   : Present Running
  Interface flags: SNMP-Traps Internal: 0x0
  Link flags : None
  CoS queues : 8 supported, 8 maximum usable queues
  Current address: 08:81:f4:81:26:68, Hardware address: 08:81:f4:81:26:68
  Last flapped   : 2013-03-31 15:11:42 EDT (00:04:46 ago)
  Input rate : 0 bps (0 pps)
  Output rate: 0 bps (0 pps)
  Active alarms  : None
  Active defects : None
  Interface transmit statistics: Disabled

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp