Re: [j-nsp] MX104 capabilities question

2016-06-07 Thread James Jun
On Tue, Jun 07, 2016 at 10:56:23PM +, Adam Vitkovsky wrote:
> >
> One thing I'm not clear about MX104 and MX80 is, are there two TRIO chips or 
> just one?

There is only 1 NPU on both platforms.

My problem with MX104 is the same 'practical-use-case' scenario and port 
economics you described.


While the MX104 provides a simpler design and fewer things to potentially fail, when I 
need to deploy a peering router with acceptable 10G port density, the ASR 9k tends 
to be more cost-effective and useful IMO; and then there is that slow BGP 
performance issue on the MX104, which does not exist on the ASR9001.

Saku did raise one important point though a few weeks earlier in c-nsp 
regarding ASR 9001 -- it is 32-bit; and with IOS-XR moving to 64-bit 
architecture, the future of 9001 as a platform is questionable when making new 
purchasing decisions.  It certainly is something to think about.

But let's talk price here, specifically port costs.

To achieve 8x 10GE on an MX104, you need the port license to unlock the built-in 
4x10GE ports, and then you need two MIC-3D-2XGE-XFP, which by themselves are 
priced like licenses (router ports aren't cheap).  By the time you're done 
pricing out an MX104 loaded up to 8x 10GE interfaces with just a single RE (but 
two PSUs), you might as well pick up an ASR 9006 with a single MOD80 card to 
match the configuration as closely as possible to a loaded MX104.  This is what 
we ended up doing to replace an aging MX80 router doing peering.  Plus, on the 
9006, you get a real control plane directly comparable to that of an MX240/480 
(RSP440 or 880), one that has no problems doing heavy BGP work.

Given that the MX104 has only half the NPUs of what the 9001 offers, plus lower 
bandwidth, plus craptastic convergence speed, I would expect its price to be 
competitively lower, not exactly the same as or slightly higher than that of a 
comparable ASR9k box, which in practice functions just fine doing heavy BGP.  
We don't need GRE tunnels on our peering routers; we just need MPLS 1-label 
imposition/disposition, IGP, fast BGP convergence, and acceptable 10GE port 
costs.  That's it; it's not too much to ask for.  For us at least, the ASR9K 
meets that; the MX104 does not.
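
To put that in concrete terms, the entire protocol footprint we need on such a 
box is roughly the following (a minimal Junos-style sketch; interface names, 
policy names, and AS/IP numbers are made up, and the usual loopback/ISO 
addressing is omitted):

set interfaces xe-0/0/0 description core-facing
set interfaces xe-0/0/0 unit 0 family inet address 192.0.2.0/31
set interfaces xe-0/0/0 unit 0 family iso
set interfaces xe-0/0/0 unit 0 family mpls
set protocols isis interface xe-0/0/0.0 point-to-point
set protocols mpls interface xe-0/0/0.0
set protocols ldp interface xe-0/0/0.0
set interfaces xe-0/0/1 description peer-facing
set interfaces xe-0/0/1 unit 0 family inet address 203.0.113.0/31
set protocols bgp group peers type external
set protocols bgp group peers import peers-in
set protocols bgp group peers export peers-out
set protocols bgp group peers neighbor 203.0.113.1 peer-as 64496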

James
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX104 capabilities question

2016-06-07 Thread Anhost
One Trio chip set. 

Sent from my iPhone

On Jun 7, 2016, at 5:56 PM, Adam Vitkovsky  wrote:

>> Ross Halliday
>> Sent: Tuesday, June 07, 2016 10:01 PM
>> To: Saku Ytti
>> Cc: juniper-nsp@puck.nether.net
>> Subject: Re: [j-nsp] MX104 capabilities question
>> 
>> Hi Saku,
>> 
>>> I don't see how this makes it any less of a box, in my mind this makes
>>> it superior box. You lost single PFE/linecard, which happens to be
>>> only linecard you have.
>>> In my mind fabricless single-linecard design is desirable, as it
>>> reduces delay and costs significantly. Not only can you omit fabric
>>> chip, but you can get >2x the ports on faceplate, as no capacity is
>>> wasted on fabric side.
>> 
>> This is a good point but kind of tangential to what I was getting at. Before
>> we were really familiar with the MX104, we went on sales and marketing
>> material that talked about "the little" MXes and "MXes with multiple slots".
>> It's very misleading. Even JUNOS MX documentation talks about FPCs being
>> separate in control and forwarding plane operations, when in reality there's
>> only AFEB0 and that's the whole box. No isolation, and "slot diversity" is
>> basically only a little bit better than adjacent ports... Again, contrary to
>> what the popular advice about "multi-slot MX routers" is. The MX104 is not
>> really a multi-slot router in the traditional sense, it just takes more MICs.
> One thing I'm not clear about MX104 and MX80 is, are there two TRIO chips or 
> just one?
> 
> 
>>> Regarding PR1031696, years ago I had bunch of 3rd party SFPs which
>>> would crash MX PFE. I practically begged JTAC to fix it. The issue was
>>> caused by SFP being sluggish to answer to I2C polling, and the code
>>> which was expecting an answer crashed when it couldn't receive I2C
>>> answer fast enough. I tried to explain to them, it's only matter of
>>> time before original SFP develops I2C error, at which point you'll see
>>> this from customer buying 1st party optics. JTAC was unconvinced, told
>>> me to re-open if I see it on 1st party.
>>> I used many channels to complain, but no avail. To me this was
>>> absolutely appalling and short-sighted behaviour.
>> 
>> Yes, and then it crashes every single SFP... brilliant design backed with
>> brilliant support... give me a break!
>> 
>>> But all platforms can have all kind of problems, and if you

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX104 capabilities question

2016-06-07 Thread Adam Vitkovsky
> Ross Halliday
> Sent: Tuesday, June 07, 2016 10:01 PM
> To: Saku Ytti
> Cc: juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] MX104 capabilities question
>
> Hi Saku,
>
> > I don't see how this makes it any less of a box, in my mind this makes
> > it superior box. You lost single PFE/linecard, which happens to be
> > only linecard you have.
> > In my mind fabricless single-linecard design is desirable, as it
> > reduces delay and costs significantly. Not only can you omit fabric
> > chip, but you can get >2x the ports on faceplate, as no capacity is
> > wasted on fabric side.
>
> This is a good point but kind of tangential to what I was getting at. Before
> we were really familiar with the MX104, we went on sales and marketing
> material that talked about "the little" MXes and "MXes with multiple slots".
> It's very misleading. Even JUNOS MX documentation talks about FPCs being
> separate in control and forwarding plane operations, when in reality there's
> only AFEB0 and that's the whole box. No isolation, and "slot diversity" is
> basically only a little bit better than adjacent ports... Again, contrary to
> what the popular advice about "multi-slot MX routers" is. The MX104 is not
> really a multi-slot router in the traditional sense, it just takes more MICs.
>
One thing I'm not clear about MX104 and MX80 is, are there two TRIO chips or 
just one?


> > Regarding PR1031696, years ago I had bunch of 3rd party SFPs which
> > would crash MX PFE. I practically begged JTAC to fix it. The issue was
> > caused by SFP being sluggish to answer to I2C polling, and the code
> > which was expecting an answer crashed when it couldn't receive I2C
> > answer fast enough. I tried to explain to them, it's only matter of
> > time before original SFP develops I2C error, at which point you'll see
> > this from customer buying 1st party optics. JTAC was unconvinced, told
> > me to re-open if I see it on 1st party.
> > I used many channels to complain, but no avail. To me this was
> > absolutely appalling and short-sighted behaviour.
>
> Yes, and then it crashes every single SFP... brilliant design backed with
> brilliant support... give me a break!
>
> > But all platforms can have all kind of problems, and if you would have
> > multiple linecards, sure, in this case you'd only crash one of them.
> > But just having multiple linecards won't help that much, you can still
> > crash all linecards due to RE problem, so you're still going to need
> > second router for proper redundancy, at which point it becomes
> > immaterial if you have this 'linecard redundancy' or not.
>
> All kinds of problems happen, yes the only "real" safeguard is to put every
> customer on their own PE. You might remember a previous conversation we
> had regarding the DDoS Protection mechanism. This thing is a major thorn in
> my side. Thanks to this "faster" design, when one of these filters kicks in,
> any traffic matching that class on the ENTIRE box is blackholed. I don't
> think this is appropriate behaviour: In my experience, it actually *creates*
> a DoS situation on these boxes.
>
Hmm, that's a good point actually; I hadn't realised that.
Since the first level at which the policers are applied is per LU, it really 
makes a difference whether the box has just one LU or two.

I really feel like Cisco dropped the ball with the RSP2 for the ASR903. Heck, 
if it allowed at least 2M routes it would be a no-brainer compared to the MX104.


adam







Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] 6VPE routes learned and hidden - ACX5048

2016-06-07 Thread Aaron
Yep, that was it, thanks !

set routing-instances one routing-options auto-export family inet6 unicast

set routing-instances three routing-options auto-export family inet6 unicast

I had to do it in both VRFs.  I tried it in one, then the other, and the route
didn't show up; only when I did it in both did the route appear.

- Aaron

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX104 capabilities question

2016-06-07 Thread Ross Halliday
Hi Saku,

> I don't see how this makes it any less of a box, in my mind this makes
> it superior box. You lost single PFE/linecard, which happens to be
> only linecard you have.
> In my mind fabricless single-linecard design is desirable, as it
> reduces delay and costs significantly. Not only can you omit fabric
> chip, but you can get >2x the ports on faceplate, as no capacity is
> wasted on fabric side.

This is a good point but kind of tangential to what I was getting at. Before we 
were really familiar with the MX104, we went on sales and marketing material 
that talked about "the little" MXes and "MXes with multiple slots". It's very 
misleading. Even JUNOS MX documentation talks about FPCs being separate in 
control and forwarding plane operations, when in reality there's only AFEB0 and 
that's the whole box. No isolation, and "slot diversity" is basically only a 
little bit better than adjacent ports... Again, this runs contrary to the 
popular advice about "multi-slot MX routers". The MX104 is not really a 
multi-slot router in the traditional sense; it just takes more MICs.
 
> Regarding PR1031696, years ago I had bunch of 3rd party SFPs which
> would crash MX PFE. I practically begged JTAC to fix it. The issue was
> caused by SFP being sluggish to answer to I2C polling, and the code
> which was expecting an answer crashed when it couldn't receive I2C
> answer fast enough. I tried to explain to them, it's only matter of
> time before original SFP develops I2C error, at which point you'll see
> this from customer buying 1st party optics. JTAC was unconvinced, told
> me to re-open if I see it on 1st party.
> I used many channels to complain, but no avail. To me this was
> absolutely appalling and short-sighted behaviour.

Yes, and then it crashes every single SFP... brilliant design backed with 
brilliant support... give me a break!

> But all platforms can have all kind of problems, and if you would have
> multiple linecards, sure, in this case you'd only crash one of them.
> But just having multiple linecards won't help that much, you can still
> crash all linecards due to RE problem, so you're still going to need
> second router for proper redundancy, at which point it becomes
> immaterial if you have this 'linecard redundancy' or not.

All kinds of problems happen; yes, the only "real" safeguard is to put every 
customer on their own PE. You might remember a previous conversation we had 
regarding the DDoS Protection mechanism. This thing is a major thorn in my 
side. Thanks to this "faster" design, when one of these filters kicks in, any 
traffic matching that class on the ENTIRE box is blackholed. I don't think this 
is appropriate behaviour: in my experience, it actually *creates* a DoS 
situation on these boxes.
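
For anyone else fighting this, the individual policers can at least be tuned per 
protocol group and monitored for violations. A rough sketch (the protocol groups 
and numbers here are only placeholders, not recommendations):

set system ddos-protection protocols arp aggregate bandwidth 5000
set system ddos-protection protocols arp aggregate burst 5000
set system ddos-protection protocols ttl aggregate bandwidth 2000

show ddos-protection protocols violations
show ddos-protection statistics

It doesn't change the single-LU blast radius, but it can keep a chatty protocol 
from tripping the default policer quite so easily.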

These routers have their place, they're definitely a Swiss Army Knife type of 
machine, it's just that the handle is really small...

Oh - and something I forgot to mention in my original email: The MX104 doesn't 
support ISSU like the "real" MX routers do. ISSU isn't an option until 
somewhere in the 14.x train, I think, and JTAC's recommended stable release is 
still in the 13.x train. Kind of ticked about that one.

Cheers
Ross


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Commit script portability between ELS and non-ELS platforms

2016-06-07 Thread Rob Foehl
Does anyone have any clever methods for probing Enhanced Layer 2 Software 
support from a commit script on QFX/EX in order to generate changes 
appropriate to the platform?  Specifically looking for something beyond 
checking hardware and version numbers, or for pieces of config hierarchy 
that might not be present on any given box either way.


Thanks!

-Rob
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] 6VPE routes learned and hidden - ACX5048

2016-06-07 Thread Hugo Slabbert
So, you're trying to leak between local VRFs on the same ACX?  With prefix 
1234:5678:0:7::/64 originating in "three" and you want to leak locally into 
"one"?


I know local leaking isn't possible on e.g. EX4550, but I don't know if 
that same limitation applies to the ACX.  If you *are* able to leak locally 
between VRFs, you'll need auto-export in the VRF to make it work:


https://www.juniper.net/documentation/en_US/junos15.1/topics/reference/configuration-statement/auto-export-edit-routing-options.html
https://www.juniper.net/documentation/en_US/junos12.3/topics/example/auto-export-configuring-verifying.html
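
Something along these lines, assuming the ACX supports local leaking at all (VRF 
names from your post; the vrf-target lines just stand in for whatever 
import/export policy you already have, as long as the two VRFs share a target):

set routing-instances three routing-options auto-export family inet6 unicast
set routing-instances three vrf-target target:1:1
set routing-instances one routing-options auto-export family inet6 unicast
set routing-instances one vrf-target target:1:1

With auto-export enabled in both instances and a route target in common, the 
prefix should be copied locally between three.inet6.0 and one.inet6.0.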

--
Hugo Slabbert   | email, xmpp/jabber: h...@slabnet.com
pgp key: B178313E   | also on Signal

On Tue 2016-Jun-07 15:34:50 -0500, Aaron  wrote:


Next subtopic related to 6VPE in my ACX5048 please...

This is done on one ACX5048, so both VRF "three" and VRF "one" are on the same
ACX5048 PE.

I advertise an IPv6 prefix into a VRF named "three" with RT 1:1 and 3:3.

I try to receive that same IPv6 prefix into a VRF named "one" with RT 1:1...
it isn't showing up.  Any idea why?

agould@eng-lab-5048-2# run show route advertising-protocol bgp 10.101.0.1 table three.inet6.0

three.inet6.0: 7 destinations, 15 routes (7 active, 0 holddown, 0 hidden)
  Prefix                 Nexthop               MED   Lclpref   AS path
* 1234:5678:0:7::/64     Self                        100       I

{master:0}[edit]
agould@eng-lab-5048-2# run show route receive-protocol bgp 10.101.0.1 table one.inet6.0

one.inet6.0: 4 destinations, 12 routes (4 active, 0 holddown, 0 hidden)
  Prefix                 Nexthop               MED   Lclpref   AS path
  ::/0                   :::10.101.0.2               100       1234 I
                         :::10.101.0.5         32    100       56789 I
                         :::10.101.0.10              100       139 I
* 1234:5678::/32         :::10.101.0.10        0     100       I
* 1234:5678:0:5::/64     :::10.101.0.254       0     100       ?
* 1234:5678:0:6::/64     :::10.101.12.100      0     100       ?

{master:0}[edit]
agould@eng-lab-5048-2# run show route advertising-protocol bgp 10.101.0.1
table three.inet6.0 detail | grep arg
Communities: target:1:1 target:3:3

{master:0}[edit]
agould@eng-lab-5048-2#

{master:0}[edit]
agould@eng-lab-5048-2# run show route receive-protocol bgp 10.101.0.1 table
one.inet6.0 detail | grep arg
Communities: target:1:1
Communities: target:1:1
Communities: target:1:1
Communities: target:1:1
Communities: target:1:1
Communities: target:1:1


- Aaron



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] 6VPE routes learned and hidden - ACX5048

2016-06-07 Thread Tarko Tikan

hey,


this is done on one ACX5048... so both vrf "three" and "one" are on same
ACX5048 PE

i advertise a ipv6 prefix into a vrf named "three" with RT 1:1 and 3:3

i try to receive that same ipv6 prefix into a vrf named "one" with RT 1:1...
it isn't showing up.  any idea why?


For prefix leaking between vrf's in the same PE, you have to add 
"auto-export"


https://www.juniper.net/techpubs/en_US/junos12.3/information-products/topic-collections/nce/auto-export-understanding/auto-export-understanding.pdf

--
tarko
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] 6VPE routes learned and hidden - ACX5048

2016-06-07 Thread Aaron
Next subtopic related to 6VPE in my ACX5048 please...

This is done on one ACX5048, so both VRF "three" and VRF "one" are on the same
ACX5048 PE.

I advertise an IPv6 prefix into a VRF named "three" with RT 1:1 and 3:3.

I try to receive that same IPv6 prefix into a VRF named "one" with RT 1:1...
it isn't showing up.  Any idea why?

agould@eng-lab-5048-2# run show route advertising-protocol bgp 10.101.0.1 table three.inet6.0

three.inet6.0: 7 destinations, 15 routes (7 active, 0 holddown, 0 hidden)
  Prefix                 Nexthop               MED   Lclpref   AS path
* 1234:5678:0:7::/64     Self                        100       I

{master:0}[edit]
agould@eng-lab-5048-2# run show route receive-protocol bgp 10.101.0.1 table one.inet6.0

one.inet6.0: 4 destinations, 12 routes (4 active, 0 holddown, 0 hidden)
  Prefix                 Nexthop               MED   Lclpref   AS path
  ::/0                   :::10.101.0.2               100       1234 I
                         :::10.101.0.5         32    100       56789 I
                         :::10.101.0.10              100       139 I
* 1234:5678::/32         :::10.101.0.10        0     100       I
* 1234:5678:0:5::/64     :::10.101.0.254       0     100       ?
* 1234:5678:0:6::/64     :::10.101.12.100      0     100       ?

{master:0}[edit]
agould@eng-lab-5048-2# run show route advertising-protocol bgp 10.101.0.1
table three.inet6.0 detail | grep arg
 Communities: target:1:1 target:3:3

{master:0}[edit]
agould@eng-lab-5048-2#

{master:0}[edit]
agould@eng-lab-5048-2# run show route receive-protocol bgp 10.101.0.1 table
one.inet6.0 detail | grep arg
 Communities: target:1:1
 Communities: target:1:1
 Communities: target:1:1
 Communities: target:1:1
 Communities: target:1:1
 Communities: target:1:1


- Aaron

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX104 capabilities question

2016-06-07 Thread Timothy Creswick
> So the next bump up (which is an investment no doubt) is the larger MX
> series. The MX240 has two card slots available. Using 16x10G or 32x10G will
> yield a nice port density. One school of thought is since the MX480 bare
> chassis is not much more than that of the MX240, it makes more ROI sense
> just to opt for the larger chassis. YMMV

Don't forget that the support cost is going to be higher on the 480 chassis.

Also, technically you can run 3 MPCs in the MX240 if you only run one RE. 
We do this, and then we deploy the MX240s in pairs, which provides full chassis 
redundancy and makes us much more comfortable.

This would be hard to do with the MX480 since the chassis is physically so much 
bigger - two in a rack is a huge amount of space.

T
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX104 capabilities question

2016-06-07 Thread Ralph E. Whitmore, III
I thank everyone for their thoughts and comments; they do indeed jibe with what 
I had already thought about the product.  As long as the MX104 is capable of 
handling the 5 full-table BGP peers (slowly, I understand), I think it's worth 
rolling one out, as it has to be better than my Sup720CXLs, which already force 
us to take smaller tables (we are currently having to take only /20 and greater, 
plus default) from our providers, as the 720 just can't hang at full tables.  In 
our particular application we do not have any significant traffic (less than 
1 Gb aggregate), but large companies like redundancy, so this is a good 
alternative.  If we were routing 20-30 Gb/sec I think I might make the call to 
move to the MX240, but the cost doesn't justify the end result in this case.  I 
will offer the bosses the MX240 alternative but doubt they will like the cost.  
I guess we can say it's all about compromises everywhere.
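
Back-of-envelope against the RE-S-MX104 figures quoted below, assuming roughly 
600k IPv4 routes per full table (illustrative arithmetic only):

  RIB:  5 transits x ~600k routes = ~3M paths   (spec: 4M IPv4 RIB)
  FIB:  ~600-650k unique best routes            (spec: 1M IPv4 FIB)

So on paper the scale fits; the concern people raise here is the convergence 
time, not the table capacity.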

Thanks again to the list.

Ralph

From: Bill Blackford [mailto:bblackf...@gmail.com]
Sent: Tuesday, June 7, 2016 11:14 AM
To: Ralph E. Whitmore, III 
Cc: juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] MX104 capabilities question

A lot of folks have responded to this so at the risk of belaboring the point, 
yes the MX104 is very slow to load the FIB or during a churn. But forwarding 
performance is solid!

Port density is an issue. The MX104 can hold up to 12 ports @ 10G and has no 
issues forming LAGs across each MIC or in combination with the on-board ports, 
but it maxes out at 80 Gbps, so with 1:1 subscription you're limited to 8x 10G 
ports. You can run out of those very quickly, particularly if it's a peering 
router and you opt for direct PNIs as time goes on and scale increases. Adding 
a second MX104 doesn't help much because once you make all of the needed 
redundant interconnects, you're still very limited on ports.

So the next bump up (which is an investment no doubt) is the larger MX series. 
The MX240 has two card slots available. Using 16x10G or 32x10G will yield a 
nice port density. One school of thought is since the MX480 bare chassis is not 
much more than that of the MX240, it makes more ROI sense just to opt for the 
larger chassis. YMMV




On Mon, Jun 6, 2016 at 1:01 AM, Ralph E. Whitmore, III <ral...@interworld.net> wrote:
I am in the process of replacing my old cisco650x hardware and was steered to 
this list to pose the following questions:

I have 4 primary BGP transits, each delivering 600k+ routes to me, and we will 
be adding another, probably 600k+, peer in the near future.  The sales rep 
recommended the MX104 to us first, but then came back to us and said "Sorry, 
this router isn't adequate for your needs; you need to be in the MX240 chassis." 
I read the specs I can find, and from a routing engine perspective (RE-S-MX104) 
they say it will handle the routes with room to grow.

From Juniper:
IPv4 unicast FIB 1 million
IPv6 unicast FIB  512K

IPv4 RIB   4 million
IPv6 RIB  3 million


So the question is: are there some other limiting factors that should steer me 
away from the MX104 to the MX240 chassis? Is the sales rep blowing smoke?  I am 
hoping to find someone here who has tried this config and will either say "yes, 
this is a great solution" or "OMG, I'd never try that again."

Thanks

Ralph
___
juniper-nsp mailing list 
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp



--
Bill Blackford

Logged into reality and abusing my sudo privileges.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX104 capabilities question

2016-06-07 Thread Bill Blackford
A lot of folks have responded to this so at the risk of belaboring the
point, yes the MX104 is very slow to load the FIB or during a churn. But
forwarding performance is solid!

Port density is an issue. The MX104 can hold up to 12 ports @ 10G and has no
issues forming LAGs across each MIC or in combination with the on-board ports,
but it maxes out at 80 Gbps, so with 1:1 subscription you're limited to 8x 10G
ports. You can run out of those very quickly, particularly if it's a peering
router and you opt for direct PNIs as time goes on and scale increases. Adding
a second MX104 doesn't help much because once you make all of the needed
redundant interconnects, you're still very limited on ports.
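
For what it's worth, spreading a bundle across a MIC and the on-board ports is
just the usual aggregated-ethernet config (a sketch; port numbers are made up):

set chassis aggregated-devices ethernet device-count 1
set interfaces xe-0/0/0 gigether-options 802.3ad ae0
set interfaces xe-2/0/0 gigether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family inet address 192.0.2.0/31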

So the next bump up (which is an investment no doubt) is the larger MX
series. The MX240 has two card slots available. Using 16x10G or 32x10G will
yield a nice port density. One school of thought is since the MX480 bare
chassis is not much more than that of the MX240, it makes more ROI sense
just to opt for the larger chassis. YMMV




On Mon, Jun 6, 2016 at 1:01 AM, Ralph E. Whitmore, III <
ral...@interworld.net> wrote:

> I am in the process of replacing my old cisco650x hardware and was steered
> to this list to pose the following questions:
>
> I have 4 primary BGP transits  each delivering 600k+ routes to me and we
> will be adding another probably 600k+peer in the near future.  The sales
> rep recommended the MX 104 to us first, but then came back to us and said
> "Sorry this router isn't adequate for your needs you need to be in the
> MX240 Chassis" I read the spec's I can find and it says from a routing
> engine perspective (RE-S-MX104)  that it will handle the routes with room
> to grow on."
>
> From Juniper:
> IPv4 unicast FIB 1 million
> IPv6 unicast FIB  512K
>
> Ipv4 RIB   4 million
> IPv6 RIB  3 million
>
>
> So the question is:  is there some other limiting factor(s)  that should
> steer me away from the MX104 to the MX240 Chassis? Is the sales rep blowing
> smoke?  I am hoping to find someone here who has tried this config and will
> either say yes this is great solution or  OMG, I'd never try that again.
>
> Thanks
>
> Ralph
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>



-- 
Bill Blackford

Logged into reality and abusing my sudo privileges.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX104 capabilities question

2016-06-07 Thread Saku Ytti
On 7 June 2016 at 17:43, Adam Vitkovsky  wrote:

> Alright but isn't running GRE tunnels over limited MTU with a need to
> reassemble fragments rather special case?

I meant those as separate data points. Typhoon will crap out at something like
12 Mpps of GRE, a far cry from Trio. And all fragmentation will be punted to
the LC CPU, whereas Trio does it in HW.


-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX104 capabilities question

2016-06-07 Thread Adam Vitkovsky
> From: Saku Ytti [mailto:s...@ytti.fi]
> Sent: Tuesday, June 07, 2016 2:56 PM
> To: Adam Vitkovsky
> Cc: Ross Halliday; juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] MX104 capabilities question
>
> On 7 June 2016 at 11:16, Adam Vitkovsky 
> wrote:
>
> > These are all valid theoretical constraints.
> > Yet MX104/MX80 system capacity is 80Gbps and ASR9k1 is 120Gbps.
>
> Because ASR9k1 has 2 NPUs.
>
> > And as we all know if you shift from the ideal packet size and pure IP
> > routing the forwarding performance deteriorates more quickly on
> > juniper NPs compared to cisco NPs.
>
> Citation needed. It's not black and white. ASR9k can't do defrag on HW, all is
> punted, Trio does at very reasonable rate in HW. Trio has much better GRE
> performance .
>
Alright, but isn't running GRE tunnels over limited MTU with a need to 
reassemble fragments a rather special case?

For BAU Internet edge implementations you need fast BGP (control plane), decent 
forwarding performance while edge/DoS filters are on (yup, TCAM helps with that 
a lot, although you can't get crazy with the length of the filters), fast 
RIB-to-FIB programming (yes, one could argue that Cisco is not quite running 
circles around Juniper, but on that note XR/XE/IOS supports PIC for pure IPv4, 
so there's no need to get your hands dirty with vpnv4 if you need a simple 
workaround for slow RIB-to-FIB programming), and decent NetFlow (yes, I'm aware 
it has its issues on XR).


> > Also the RP on ASR9k1 is faster than one used in MX104.
>
> The HW itself on MX104 is faster, ASR9k1 is P4040 I believe, MX104 is P5021.
> But of course that's not full truth. For example RSP720 is slower CPU than
> MX104, but RSP720 control-plane runs circles around MX104. Why JunOS is so
> dog slow, particularly on PPC, I have no idea.
>
Well, I'd have a couple, but none would benefit this particular discussion.

> > So I'd say ASR9k1 is better box than MX104/MX80.
>
> I wouldn't, but I accept it's opinion not fact.
>
I stated a couple of facts above explaining why, when selecting a router for the 
Internet edge, it is a clear-cut choice for me.


adam






Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX104 capabilities question

2016-06-07 Thread Saku Ytti
On 7 June 2016 at 11:16, Adam Vitkovsky  wrote:

> These are all valid theoretical constraints.
> Yet MX104/MX80 system capacity is 80Gbps and ASR9k1 is 120Gbps.

Because ASR9k1 has 2 NPUs.

> And as we all know if you shift from the ideal packet size and pure IP
> routing the forwarding performance deteriorates more quickly on juniper NPs
> compared to cisco NPs.

Citation needed. It's not black and white. ASR9k can't do defrag in HW, it is
all punted, whereas Trio does it at a very reasonable rate in HW. Trio also has
much better GRE performance.

> Also the RP on ASR9k1 is faster than one used in MX104.

The HW itself on the MX104 is faster: the ASR9k1 is a P4040 I believe, while
the MX104 is a P5021. But of course that's not the full truth. For example, the
RSP720 is a slower CPU than the MX104's, yet the RSP720 control plane runs
circles around the MX104. Why JunOS is so dog slow, particularly on PPC, I have
no idea.

> So I'd say ASR9k1 is better box than MX104/MX80.

I wouldn't, but I accept it's opinion not fact.

-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX104 capabilities question

2016-06-07 Thread Adam Vitkovsky
> Of Saku Ytti
> Sent: Tuesday, June 07, 2016 7:02 AM
>
> On 7 June 2016 at 03:09, Ross Halliday
>  wrote:
>
> Hey,
>
> In my mind fabricless single-linecard design is desirable, as it reduces delay
> and costs significantly. Not only can you omit fabric chip, but you can get 
> >2x
> the ports on faceplate, as no capacity is wasted on fabric side.
> Because of this design choice, I think MX80/MX104 is better box with smaller
> BOM than ASR9001.
>
These are all valid theoretical constraints.
Yet the MX104/MX80 system capacity is 80 Gbps while the ASR9k1's is 120 Gbps.
And as we all know, if you shift away from the ideal packet size and pure IP 
routing, the forwarding performance deteriorates more quickly on Juniper NPs 
compared to Cisco NPs.
Also, the RP on the ASR9k1 is faster than the one used in the MX104.
So I'd say the ASR9k1 is a better box than the MX104/MX80.


adam








Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp