VPLS with RR works very well for me in a small lab (see the example below).
Make sure that your loopbacks are reachable through LDP and that MPLS is
enabled on the interfaces.
mx5t# top show logical-systems r1 protocols bgp
group rr-client {
type internal;
local-address 172.27.255.1;
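The config above is cut off in the archive; a minimal RR-client group for VPLS/L2VPN signaling would look roughly like this (the neighbor address is hypothetical, and the cluster ID assumes this box is the reflector):

group rr-client {
    type internal;
    local-address 172.27.255.1;
    family l2vpn {
        signaling;                 /* carries the VPLS NLRI */
    }
    cluster 172.27.255.1;          /* this box is the route reflector */
    neighbor 172.27.255.2;         /* hypothetical RR client */
}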
Hi,
I've only really dealt with traffic levels under 20Gbps. I have a client
that will be pushing over 100Gbps, and close to 200 within the next six
months; at least that's the goal. Judging by the type of traffic it is... I
could see it happening. I'm probably in over my head, but that's another
On Tue, 2 Jul 2013, Morgan McLean wrote:
of row switches. This leaves me with hoping that OSPF ECMP works well
enough to push these kinds of traffic levels over a bunch of 10GE links,
OSPF is control plane, it'll set up the ECMP and tell the forwarding plane
about it. On some platforms,
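Worth noting for the OP: on Juniper gear, OSPF ECMP only gives the forwarding plane multiple next hops; to actually spray flows across all of them you generally also need a load-balancing policy exported to the forwarding table. A common sketch (the policy name is arbitrary):

policy-options {
    policy-statement lb {
        then {
            load-balance per-packet;   /* despite the name, this is per-flow on Trio */
        }
    }
}
routing-options {
    forwarding-table {
        export lb;
    }
}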
On Jul 2, 2013, at 3:58 PM, Morgan McLean wrote:
I don't know if they will be giving us lags, which means I'll be potentially
running 10-15 BGP sessions per MX, which is a lot of routes.
I'm not a Juniper person, but the MXes aren't the recommended boxes for your
peering/transit edge.
On Tuesday, July 02, 2013 12:56:43 PM Dobbins, Roland wrote:
I'm not a Juniper person, but the MXes aren't the
recommended boxes for your peering/transit edge.
Says who?
They also said the T is a core router. I'm sure I can show
you someone using it as an edge router or route reflector
:-).
We run pretty much exactly what you describe in the 100Gbps+ scale using
MX480s with RE-S-1800 and don't have any problems.
Contact me off list if you need any tips.
On 13-07-02 04:58 AM, Morgan McLean wrote:
Hi,
I've only really dealt with traffic levels under 20Gbps. I have a client
that
On Jul 2, 2013, at 7:34 PM, Gabriel Blanchard wrote:
We run pretty much exactly what you describe in the 100Gbps+ scale using
MX480s with RE-S-1800 and don't have any problems.
Wow, I had no idea those boxes could handle that level of traffic - thanks for
the clue!
On Jul 2, 2013, at 7:19 PM, Mark Tinka wrote:
Says who?
Doh - MX*480*, not MX*80*. My mistake.
---
Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com
Luck is the residue of opportunity and design.
I'm not a Juniper person, but the MXes aren't the
recommended boxes for your peering/transit edge.
Says who?
They also said the T is a core router. I'm sure I can show
you someone using it as an edge router or route reflector
In fact I'd say that MXes are an excellent choice for
* Sebastian Wiesinger juniper-...@ml.karotte.org [2013-07-01 12:11]:
Hello,
I need to do a sort of dumb Q-in-Q on a MX box. What I want from
the MX is:
Hello,
a follow up to my question. We decided to do MPLS CCC (as we have a
MPLS enabled core).
It works just fine with RSVP. I'll send
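For the archives, a CCC cross-connect over RSVP LSPs looks roughly like this (interface, unit, VLAN, and LSP names are made up for illustration):

interfaces {
    ge-0/0/1 {
        vlan-tagging;
        encapsulation vlan-ccc;
        unit 100 {
            encapsulation vlan-ccc;
            vlan-id 100;
        }
    }
}
protocols {
    connections {
        remote-interface-switch to-r2 {
            interface ge-0/0/1.100;
            transmit-lsp lsp-to-r2;    /* RSVP LSP towards the far end */
            receive-lsp lsp-from-r2;   /* RSVP LSP back from the far end */
        }
    }
}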
On Jul 2, 2013, at 8:19 PM, sth...@nethelp.no wrote:
In fact I'd say that MXes are an excellent choice for peering/transit
edge.
Yes, I misread the OP's post as saying 'MX80', which is a smaller box, not
'MX480', my mistake.
And what is wrong with the MX80 as a peering/transit router for up to 80Gbps of
traffic?
Thanks,
-Drew
-----Original Message----- From: Dobbins, Roland; Sent: Tuesday, July 02, 2013 9:01 AM; To: juniper-nsp Puck
On Tuesday, July 02, 2013 03:00:30 PM Dobbins, Roland wrote:
Doh - MX*480*, not MX*80*. My mistake.
Well, in fairness, the only thing that kills the MX80 is
that crappy PPC control plane.
If you can keep your peer sessions under control, the
forwarding plane should happily push a fair bit of
On Jul 2, 2013, at 8:55 PM, Drew Weaver wrote:
And what is wrong with the MX80 as a peering/transit router for up to 80Gbps
of traffic?
The OP was talking about 200Gb/sec or more of traffic, with multiple eBGP
peering relationships.
On Tuesday, July 02, 2013 03:19:27 PM sth...@nethelp.no
wrote:
In fact I'd say that MXes are an excellent choice for
peering/transit edge.
Yes, if you have tons of high speed peering, the MX chassis'
are good if terminating the links on the box directly is
commercially feasible.
Otherwise,
On Tuesday, July 02, 2013 03:55:33 PM Drew Weaver wrote:
And what is wrong with the MX80 as a peering/transit
router for up to 80Gbps of traffic?
If your traffic grows linearly with your peering sessions,
it could come down like a deck of cards.
If not, no reason why the MX80 won't push.
Mark.
Just a smaller FIB table is all I know of; we use them in a few places as
transit peering points for the time being. We will probably upgrade to 480s at
some point depending on the number of routes when IPv6 kicks in and the amount
of traffic.
Matt
On (2013-07-02 14:07 +), Haynes, Matthew wrote:
Just a smaller FIB table is all I know of; we use them in a few places as
transit peering points for the time being. We will probably upgrade to 480s
at some point depending on the number of routes when IPv6 kicks in and the
amount of
On 03/07/13 00:56, Darius Jahandarie wrote:
On Tue, Jul 2, 2013 at 9:55 AM, Drew Weaver drew.wea...@thenap.com wrote:
And what is wrong with the MX80 as a peering/transit router for up to 80Gbps
of traffic?
The lack of redundant REs + inability to have an external RE.
Other than the amount
You buy 4x MX80s for the price of a fully redundant MX240, 480, etc.
Not saying that price is everything.
-----Original Message----- From: Darius Jahandarie; Sent: Tuesday, July 02, 2013 10:57 AM; To: Drew Weaver; Cc: Dobbins, Roland; juniper-nsp Puck
On Tuesday, July 02, 2013 04:50:43 PM Saku Ytti wrote:
MX104 has faster QorIQ core and 4GB DRAM.
Although I'd have been happier to see a 1U MX switch-router
from Juniper, the MX104 is a reasonably welcome chassis,
particularly if you're looking at mixed Ethernet and
non-Ethernet (mostly
On Tuesday, July 02, 2013 05:09:44 PM Julien Goodwin wrote:
Other than the amount of CPU capacity on the MX80 (which
has been done to death here), do you really have REs
fail often enough to be a problem?
When we had chassis-based core switches, we slowly moved
away from dual control planes
And what is wrong with the MX80 as a peering/transit router for up to 80Gbps
of traffic?
Not sure how you count up to 80Gbps. Under normal circumstances you
would use the 4 fixed ports as uplinks, meaning 2x10Gbps towards two
uplink routers. So if you're extremely lucky with your load
On Tuesday, July 02, 2013 05:31:08 PM sth...@nethelp.no
wrote:
Not sure how you count up to 80Gbps.
As a friend of mine used to say, Californian Count :-).
40 * 2
Mark.
Slow control plane. No RE redundancy. More limited RIB/FIB than
regular MX. Cryptic licensing scheme.
Otherwise nothing really wrong.
Christian
On 02/07/2013 15:55, Drew Weaver wrote:
And what is wrong with the MX80 as a peering/transit router for up to 80Gbps of
traffic?
Thanks,
-Drew
Thanks for your response.
Yeah, LDP and MPLS are running and reachable (there are test l2circuits
between lab edge routers). I noticed that on your BGP session you're only
carrying the l2vpn family -- does it still work if you also carry inet.0 + inet6.0
on the same session?
james
Hello,
Can anyone tell me how to display the output of the following using a commit
script (or other means) every time there's a commit?
show | display set | match deactivate
Thanks,
Serge
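One way to do this without writing a commit script is an event policy that fires on the commit-complete event and archives the output; a rough sketch (the policy, filename, and destination names are made up, and the exact syntax may need tweaking for your Junos version):

event-options {
    policy show-deactivated {
        events ui_commit_completed;         /* fires after every successful commit */
        then {
            execute-commands {
                commands {
                    "show configuration | display set | match deactivate";
                }
                output-filename deactivated-config;
                destination local-dir;
                output-format text;
            }
        }
    }
    destinations {
        local-dir {
            archive-sites {
                /var/tmp;                   /* hypothetical local archive path */
            }
        }
    }
}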
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
yes, no problem after I activate inet unicast and inet6 unicast:
mx5t# run show bgp summary logical-system r2
Groups: 1 Peers: 1 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
bgp.l2vpn.0
                       1          1          0          0
Yup, you're right, there's nothing wrong with the RR behavior.
I found the problem -- it appears that the standard policy statements I use
to control the inet.0 + inet6.0 tables are filtering out L2VPN routes, as they are
not tagged with the internal/standard AS communities. I've set those on
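For anyone hitting the same thing, the failure mode looks roughly like this hypothetical export policy: the final reject term silently eats the l2vpn NLRI because those routes never carry the expected community. Adding an explicit accept term for the l2vpn family (or tagging the routes) fixes it:

policy-options {
    community internal members 65000:100;    /* hypothetical AS community */
    policy-statement ibgp-export {
        term l2vpn {
            from family l2vpn;
            then accept;                     /* let VPLS signaling routes through */
        }
        term tagged {
            from community internal;
            then accept;
        }
        term reject-rest {
            then reject;                     /* this was silently eating l2vpn routes */
        }
    }
}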
Hi all,
I’m really happy to announce that the Bay Area Juniper Users Group (BAJUG4) has
been scheduled for August 28th 2013.
Fun fact: we were breaking the fire code in our previous events because we had
so many people attend. For BAJUG4 I have reserved the Aspiration Dome (Juniper
event
Wow, this thread snowballed into quite the MX80 debate. For the record, I
run two in production where I am employed full time and they perform
beautifully, though woefully underutilized.
Using static routes and /32s as peering endpoints is a great option I
skimmed over; I'll see if the upstream
On Jul 2, 2013, at 1:00 PM, Morgan McLean wrx...@gmail.com wrote:
Any good aggregation switch suggestions? Juniper doesn't provide good
ports for the $ in the switching realm... customer balked at the cost for a
four-port 40G blade on a 9200. Might check out Brocade...
If you want to remain