On Thursday, July 22, 2010 10:48:20 pm Pavel Lunin wrote:
> Ok. But let's be honest, it's tricky. Especially in L3
> metro access, where it is easy to get a loop if you don't
> rely on what customers announce to you (I have spent weeks
> on the phone with a major Russian ISP on behalf of
On Thursday, July 22, 2010 07:00:45 pm Pavel Lunin wrote:
> The problem with EX in ISP applications is its very limited
> MPLS capabilities. So if you want VPLS or L3VPN in some
> remote location, EX will not save you, although it has
> way higher forwarding performance.
Yes, this indeed was a problem
IIRC, it's quite possible to use the closest MX-series router to stitch a
draft-kompella pseudowire from the EX-series into a VPLS domain (it was
discussed on this list not so long ago).
Not sure about L3VPN though.
BTW, Kompella? Didn't you mean CCC? EX only supports single-label MPLS.
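For anyone wanting to try that, below is a rough sketch of what the MX side of
such a stitch could look like, assuming the EX end is a plain CCC over RSVP LSPs
and the MX has tunnel services enabled for an lt- interface. All names here
(lt-1/0/10, the LSP names, the VPLS instance, RD/RT values) are made up for
illustration only; verify the details against your Junos release:

    # MX side only (sketch): terminate the CCC pseudowire from the EX on one
    # lt- unit and bridge its peer unit into the BGP-signalled VPLS instance.
    chassis {
        fpc 1 { pic 0 { tunnel-services; } }        # lt- interfaces need this
    }
    interfaces {
        lt-1/0/10 {
            unit 0 { encapsulation ethernet-ccc;  peer-unit 1; }
            unit 1 { encapsulation ethernet-vpls; peer-unit 0; }
        }
    }
    protocols {
        connections {
            remote-interface-switch FROM-EX {       # CCC leg towards the EX
                interface lt-1/0/10.0;
                transmit-lsp MX-TO-EX;              # RSVP LSPs assumed to exist
                receive-lsp EX-TO-MX;
            }
        }
    }
    routing-instances {
        METRO-VPLS {
            instance-type vpls;
            interface lt-1/0/10.1;                  # VPLS leg
            route-distinguisher 65000:100;
            vrf-target target:65000:100;
            protocols { vpls { site EX-SITE { site-identifier 10; } } }
        }
    }

The EX itself then only needs a ccc-encapsulated customer port and the
single-label LSP towards the MX, which is within what its PFE can do.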
Example: you run a routed metro (or datacenter) Ethernet ring with fewer
than 12k entries in the FIB. One of your customers wants to turn on BGP on
his link. There are lots of choices for how to do that:
[...]
But that is one of the cases where having the full view in your edge
switch's RIB and redistributed to cust
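(For the FIB side of that, a minimal sketch, assuming a Junos release where a
forwarding-table export policy is honoured on the EX; the policy name is made
up. The idea is to keep the full view in the RIB but install only a default
plus your IGP/direct routes in the FIB, staying under the ~12k limit:)

    policy-options {
        policy-statement FIB-POLICY {
            term keep-default {
                from { route-filter 0.0.0.0/0 exact; }
                then accept;
            }
            term bgp-rib-only {
                from protocol bgp;
                then reject;            # BGP routes stay in the RIB only
            }
            # direct, static and IGP routes fall through and install as usual
        }
    }
    routing-options {
        forwarding-table {
            export FIB-POLICY;          # controls what gets installed in the FIB
        }
    }

You then forward towards the upstream via the default while still having the
full view available for the customer's BGP session.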
On Thu, Jul 22, 2010 at 03:00:45PM +0400, Pavel Lunin wrote:
>
> On 22.07.2010 14:33, Alexandre Snarskii wrote:
> >> we also bump into the requirement for cheap devices running everything,
> >> including L3VPN/VPLS, for a few hundred megs. I would suggest using the
> >> SRX240H in packet mode and not even thinking about full BGP (they can't)
On 22.07.2010 14:33, Alexandre Snarskii wrote:
we also bump into the requirement for cheap devices running everything,
including L3VPN/VPLS, for a few hundred megs. I would suggest using the
SRX240H in packet mode and not even thinking about full BGP (they can't)
You can try the same trick as w
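(If it helps, the usual knobs for getting a branch SRX out of flow processing
look roughly like this; a sketch from memory, with the interface, filter name
and address made up, and the MPLS knob needs a reboot. Double-check against
your release:)

    security {
        forwarding-options {
            family mpls { mode packet-based; }      # MPLS handled in packet mode
        }
    }
    firewall {
        family inet {
            filter BYPASS-FLOW {
                term all {
                    then {
                        packet-mode;                # selective packet-based services
                        accept;
                    }
                }
            }
        }
    }
    interfaces {
        ge-0/0/0 {
            unit 0 {
                family inet {
                    filter { input BYPASS-FLOW; }
                    address 192.0.2.1/30;           # example address
                }
            }
        }
    }

Note that, as discussed further down in this thread, switching to packet mode
does not give back the memory flowd has already pre-allocated.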
On 21.07.2010 22:34, Christopher E. Brown wrote:
That is exactly our use: up to a couple hundred megs of IP services on one,
a couple hundred megs of L3 MPLS on another, and L2-circuit/VPLS on a third.
Alaska has many small remote locations. For larger areas, M and MX
platforms are better, and can
On 20 Jul 2010, at 23:14, Christopher E. Brown wrote:
> I know a lot of us here have been bitten by this, and the fact that disabling
> flow mode and reverting to packet does not free up any of the ~ 460MB or so
> being eaten by fwdd/flowd is insane.
> I am currently having the "This is a design feature, the pre-alloc is
> planned" argument with an SE.
On 7/21/2010 3:47 PM, Heath Jones wrote:
> Chris - Is the current situation that Juniper have said there is no
> workaround / configuration change that can be made to stop the
> allocation of memory for the flow forwarding information?
The current response is that this memory consumption is "by design".
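For those filing JTAC cases, it's worth grabbing the numbers first. Something
along these lines (hostname in the prompt is made up, exact output varies by
release) shows the overall RE memory utilization and the per-process footprint
of flowd/fwdd:

    user@j2350> show chassis routing-engine
    user@j2350> show system processes extensive | match flowd
    user@j2350> show system processes extensive | match fwdd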
Chris - Is the current situation that Juniper have said there is no
workaround / configuration change that can be made to stop the allocation of
memory for the flow forwarding information?
On 20 July 2010 23:14, Christopher E. Brown wrote:
>
>
> I know a lot of us here have been bitten by this,
Just a quick thought... what about renaming the flowd binary..?
No I haven't tested it :)
On 20 July 2010 23:14, Christopher E. Brown wrote:
>
>
> I know a lot of us here have been bitten by this, and the fact that
> disabling flow mode and
> reverting to packet does not free up any of the ~ 460MB or so being eaten
> by fwdd/flowd is insane.
7/20/2010 11:14 PM
To: juniper-...@punk.nether.net
Subject: [j-nsp] J series users bitten by the massive memory use increase with
flow mode add, please file jtac cases.
I know a lot of us here have been bitten by this, and the fact that disabling
flow mode and
reverting to packet does not free up any of the ~ 460MB or so being eaten by
fwdd/flowd is insane.
[j-nsp] J series users bitten by the massive memory use increase
with flow mode add, please file jtac cases.
Ditto, I was writing basically the same email when I saw your post come
through. I got a set of 2350s out of the box running at 60% memory
utilization; I added an L3 VPN (with 3 route
From: juniper-nsp-boun...@puck.nether.net
[mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of Christopher E.
Brown
Sent: Tuesday, July 20, 2010 5:15 PM
To: juniper-...@punk.nether.net
Subject: [j-nsp] J series users bitten by the massive memory use increase
with flow mode add, please file jtac cases.
I know a lot of us here have been bitten by this, and the fact that disabling
flow mode and reverting to packet does not free up any of the ~ 460MB or so
being eaten by fwdd/flowd is insane.
I am currently having the "This is a design feature, the pre-alloc is planned"
argument with an SE.
I