Re: [j-nsp] ACX5448 & ACX710 - Update!

2020-07-30 Thread Luis Balbinot
I have worked with telecom companies for years, and DC is the standard for
pretty much all of them. If you have a small shelter or container you can
deploy a DC UPS system with a handful of batteries that will last for hours
and will not take much space. Look inside a mobile Node B station and you’ll
only find DC power.

All major IDCs will provide you -48VDC too.

And DC saves a bit of money through better power efficiency over the years.

Luis

On Thu, 30 Jul 2020 at 07:38 Baldur Norddahl  wrote:

>
>
> On 30.07.2020 10.29, Mark Tinka wrote:
> > The ACX710 was clearly built for one or two mobile network operators.
> > There is no doubt about that.
> >
> > Juniper have been making boxes that support both AC and DC for yonks.
> > Hardened and regular. What's so special about the ACX710? In 2020?
> >
>
> To be fair there are more than two Juniper customers worldwide that are
> using 48V DC. To my knowledge DC power is very common in the telco world.
>
> What is special about the ACX710 is probably the price point. They want a
> device for a certain market without losing the ability to sell a higher
> priced device for another market.
>
> Regards,
>
> Baldur
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Luis Balbinot
The MPC7E-MRATE is only good if you have to add a few 100G ports to a large
chassis (i.e. MX960) that has lots of 10G interfaces and/or service cards.
It's about 2/3 of the price of a new MX10003 with 12x100G.

On Wed, Mar 4, 2020 at 12:45 PM Mark Tinka  wrote:

>
>
> On 4/Mar/20 17:18, Tom Beecher wrote:
> > Likely, but if you only need like 4  :)
>
> Then try the MPC7E :-). Cheaper than the MPC10E.
>
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ACX5448 & ACX710

2020-01-21 Thread Luis Balbinot
The 5448 and the 5048 are quite different. I have several 5048s in my plant,
and when we asked Juniper about a replacement with 100G interfaces,
their engineers compared the config template from our 5048s and said the
5448 wasn't capable of doing some of the RSVP and RPM stuff we were doing
on the 5048. This was about 6 months ago.

Luis

On Tue, Jan 21, 2020 at 4:45 PM Aaron Gould  wrote:

> I've had an ACX5448 in my lab on loaner for over a year.  I need to refresh
> myself on how well it performed.  I have the little-brother ACX5048,
> probably 50 of them all over my network doing quite well.  Pretty sure
> those
> are not Trio based.
>
> Never heard of the ACX710, but see it in slide 22 here ...
>
> https://senetsy.ru/upload/juniper-summit-2019/5G-ready_Transport_Networks_Evgenii_Bugakov_Juniper.pdf
> 
> ACX710 and ACX753.  I'm curious about interfaces and modules and
> capabilities of both of them.
>
> -Aaron
>
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] ALB on PTX

2019-11-22 Thread Luis Balbinot
Hey.

Anyone else using ALB on PTX boxes (10K)? We ran into some balancing issues
on a specific link, and looking at the adaptive statistics we don't see any
of the counters incrementing. Is this expected somehow? It's a regular p2p
circuit, no vlans or anything.

> show interfaces ae4 extensive | match Adapt
Adaptive Statistics:
Adaptive Adjusts:  0
Adaptive Scans  :  0
Adaptive Updates:  0
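
For reference, ALB is enabled on the bundle with something along these
lines (from memory, so double-check the exact knob on your release):

  set interfaces ae4 aggregated-ether-options load-balance adaptive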

Thanks.

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] PCS errors with PTX box

2019-08-19 Thread Luis Balbinot
Hey.

Anyone here using PTX1Ks with multiple 100G LR4 links and third party optics?

We recently started deploying a few PTX1K routers in some locations
and we are getting some weird PCS errored blocks on LR4 interfaces. We
haven't tested with the official Juniper QSFP28 module yet, but we
tried with two different brands. Apparently the issue only happens
with LR4 interfaces.

We are on Junos 17.4R2-S2.3. JTAC hasn't found anything yet and our SE
even suggested replacing the box, but we get the same issue with
multiple boxes. If we play around with the interfaces (e.g. move a link
from port 25 to 67) we sometimes get rid of the errors on that link.
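
For reference, this is roughly how we spot them (the port is just an
example; the counters show up under the extensive output):

  show interfaces et-0/0/25 extensive | match "PCS|Errored"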

Thanks.

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] 40Gig Ether for MX480

2019-07-19 Thread Luis Balbinot
Same. Juniper is WAY too late with an ACX5048 replacement that has
100G interfaces. We had great expectations for the ACX5448 until we
saw the price list, which is 3-4x higher than the 5048.

Regarding the original question, I'd also check the MPC5 if your
budget is restricted and you have slots to spare. You can get 12x10G
and 3x40G if you only need to serve that one customer over 40G.
Juniper's pricing for the MPC7-MRATE is also ridiculous at 2x the
price of an MX204.

On Fri, Jul 19, 2019 at 10:27 AM Aaron Gould  wrote:
>
> My ISP network is core/agg mpls rings of MX960's and ACX5048's. The 960's
> connect 40 gig to the 5048's using the MPC7E-MRATE in the MX960.
>
> Seems good to me so far
>
> Also use MX960 40 gig on MPC7E-MRATE to DC/CDN deployments of QFX5120's
> (pure Ethernet tagging).
>
> -Aaron
>
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] QFX5100 red alarm after power-off

2019-02-14 Thread Luis Balbinot
Ask your SE, it might be faster.

But sometimes the piece of software that actually controls those LEDs
(and the whole chassis) runs on the OS that you have just powered off.
Maybe the red LED means that there's no chassis management at all. That's
why we get extremely loud fans on power-up that calm down as soon as the
chassis management process kicks in.

Luis

On Thu, Feb 14, 2019 at 3:57 PM Alain Hebert  wrote:
>
>  Then you should open a ticket with the JTAC.
>
> -
> Alain Hebert  aheb...@pubnix.net
> PubNIX Inc.
> 50 boul. St-Charles
> P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
> Tel: 514-990-5911  http://www.pubnix.net  Fax: 514-990-9443
>
> On 2/14/19 11:57 AM, Giovanni Bellac via juniper-nsp wrote:
> >
> > Hi,
> > it matters for us, because I want to know the root cause of the red alarm 
> > LED when powering it down with an enabled management port.
> > Kind regards
> >  On Thursday, 14 February 2019 at 13:12:13 CET, Anderson, Charles R
> >  wrote the following:
> >
> >   Why does it matter since the switch is halted/powered off anyway?
> >
> > On Thu, Feb 14, 2019 at 08:31:43AM +, Giovanni Bellac via juniper-nsp 
> > wrote:
> >>Hi,
> >> we are issuing the command (request system power-off) when the switch is 
> >> booted up normally. We are experiencing this on different JunOS versions 
> >> (14.1, 17.3, 18.1).
> >>  From our last email:
> >> Short update: We experience the red alarm only when the management port
> >> is connected when powering off the QFX5100-48T - weird... When the
> >> management port is not connected when powering down the device, there is
> >> no red alarm.
> >> Kind regards
> >>
> >>
> >>  On Wednesday, 13 February 2019 at 16:22:59 CET, Anderson, Charles R
> >>  wrote the following:
> >>
> >>On Wed, Feb 13, 2019 at 11:08:16AM +, Giovanni Bellac via 
> >> juniper-nsp wrote:
> >>> after powering off a QFX5100-48T (request system power-off) the fans
> >>> spin down and the ALARM LED lights red. The switch was working
> >>> and looking as expected without any error messages.
> >>>
> >>> Is this normal behavior? Does someone have a spare unit for a short test?
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Avoid transit LSPs

2019-01-29 Thread Luis Balbinot
Yes it can, but it won't scale. There are lots of PE routers with the same
situation.

I was expecting a simple configuration knob to avoid detour LSPs but
there seems to be none.

Breaking up the OSPF areas could be an option, but I don't see anything
working differently if I export the TED via BGP to those routers, as CSPF
would find its way through those routers once again.

Today I don't have any issues because all PE routers are using tunneled
LDP. But since there's no flexibility on how we can control which FEC takes
each tunnel I'm having to extend RSVP all the way to the PEs.
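
For reference, the tunneled LDP side is roughly this (the LSP name is
just an example):

  set protocols mpls label-switched-path to-pe1 ldp-tunneling
  set protocols ldp interface lo0.0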

Luis

On Tue, 29 Jan 2019 at 07:03  wrote:

> > From: Luis Balbinot 
> > Sent: Monday, January 28, 2019 1:39 PM
> >
> > I have many LSPs from P1 to P4 and all have FRR protection (Juniper FRR,
> 1:1).
> > Even with two distinct paths from P1 to P4 (both with much lower IGP
> > metrics) I get some detour LSPs setup on PE1. PE1 is a low-end ACX5k with
> > very limited transit capabilities and I definitely don't want any
> transit traffic
> > through it.
> >
> > I could color the green links and call them "p-pe" for example but I
> can't just
> > exclude all "p-pe" links from the FRR selection. If I setup an LSP from
> P1 to
> > PE1 the CSPF path will be P1->P2->PE1 and I'll need the P1->P2->P3->PE1
> > path for protection, but that can't be setup because it has a "p-pe"
> member
> > in the path.
> >
> > P2->PE1 and P3->PE1 share the same fate as P2->P3, but even with SRLG
> > it is not guaranteed that LSPs won't be setup over the PE1 circuits.
> >
> I think what could be done in this topology is to mark the P2-PE1 and
> P3-PE1 links with special colour (say green for the sake of this example).
> 1) Now as for the RSVP-TE signalled LSPs between P routers (e.g. between
> P1 and P4)
> Those would be set up to exclude green colour when signalling their detour
> LSPs.
> [edit protocols mpls label-switched-path LSP_P1_TO_P4]
> set fast-reroute exclude GREEN
> 2) Only LSPs to PE1 will have no such constraint configured for their
> detour LSPs.
> And thus will be able to use the P2-PE1 and P3-PE1 links in order to
> protect the last leg of the path towards PE1
>
> adam
>
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Avoid transit LSPs

2019-01-25 Thread Luis Balbinot
I've tried that before but those settings under RSVP don't seem to
affect FRR LSPs. I tried it both at the node where I don't want to allow
transit LSPs and at the node just before it.

Am I missing something? There has to be a way to avoid it. I have
around 5000 tunnels in my production network and I don't want them
accidentally going through an ACX5k box, for instance, in case of a
catastrophic failure.

Luis


Luis

On Fri, Jan 25, 2019 at 4:32 PM Tom Beecher  wrote:
>
> So I missed your specific comment about being concerned about the bypass LSPs.
>
> I think (although I haven't tested this in forever) if you enable 
> no-node-protection under RSVP , that will prevent those interfaces from being 
> available for any node or link bypass LSP to use.
>
> On Fri, Jan 25, 2019 at 11:51 AM Luis Balbinot  wrote:
>>
>> Please let me know if you find some other approach.
>>
>> The overload bit helps but in the absence of another path the RSVP FRR
>> mechanism will setup a bypass LSP through a node with the overload bit
>> set. And link coloring does not help, at least in my case.
>>
>> Luis
>>
>> On Fri, Jan 25, 2019 at 11:20 AM Jason Lixfeld  wrote:
>> >
>> > I’m testing a similar approach (except using the ISIS overload bit) that 
>> > aims to prevent the path between a pair of LSRs via the links to and 
>> > through my RRs from being considered as a possible transit path.  Seems to 
>> > work just fine in the lab.
>> >
>> > > On Jan 24, 2019, at 3:24 PM, Luis Balbinot  wrote:
>> > >
>> > > That’s a good idea. I’m not 100% sure that this will prevent the creation
>> > > of bypass LSPs but I’ll give it a try.
>> > >
>> > > Thanks!
>> > >
>> > > Luis
>> > >
>> > > On Thu, 24 Jan 2019 at 18:01 Colby Barth  wrote:
>> > >
>> > >> Luis-
>> > >>
>> > >> You could probably set the overload bit.
>> > >>
>> > >> -Colby
>> > >>
>> > >> On 1/24/19, 1:10 PM, "juniper-nsp on behalf of Dave Bell" <
>> > >> juniper-nsp-boun...@puck.nether.net on behalf of m...@geordish.org> 
>> > >> wrote:
>> > >>
>> > >>I'm not aware of any option that will do this.
>> > >>
>> > >>The three solutions that I can think of are:
>> > >>Link colouring like Adam suggests
>> > >>An explicit path that avoids the interfaces you are worried about
>> > >>Set the RSVP cost for the interfaces really high
>> > >>
>> > >>Dave
>> > >>
>> > >>On Thu, 24 Jan 2019 at 17:01, Luis Balbinot 
>> > >> wrote:
>> > >>
>> > >>> It's a permanent thing.
>> > >>>
>> > >>> These boxes are PE routers that are not supposed to handle transit
>> > >>> traffic but during certain network events a few FRR bypass LSPs are
>> > >>> established through them because that's the only remaining path.
>> > >>>
>> > >>> Something easier like a "no-eligible-backup" toggle like the one we
>> > >>> have with OSPF LFA would be nice.
>> > >>>
>> > >>> Luis
>> > >>>
>> > >>> On Thu, Jan 24, 2019 at 2:53 PM 
>> > >> wrote:
>> > >>>>
>> > >>>>> Luis Balbinot
>> > >>>>> Sent: Thursday, January 24, 2019 4:45 PM
>> > >>>>>
>> > >>>>> Hi.
>> > >>>>>
>> > >>>>> How could I prevent a device from getting transit RSVP LSPs being
>> > >>>>> established through it? I only want it to accept ingress LSPs
>> > >> destined
>> > >>> to
>> > >>>> that
>> > >>>>> box.
>> > >>>>>
>> > >>>> If this is a permanent thing,
>> > >>>> You could create a colouring scheme where all links connected to
>> > >> this
>> > >>> node
>> > >>>> have to be avoided by all LSPs with the exception of LSPs
>> > >> terminated on
>> > >>> this
>> > >>>> node (or originated by this node).
>> > >>>>
>> > >>

Re: [j-nsp] Avoid transit LSPs

2019-01-25 Thread Luis Balbinot
Please let me know if you find some other approach.

The overload bit helps, but in the absence of another path the RSVP FRR
mechanism will set up a bypass LSP through a node with the overload bit
set. And link coloring does not help, at least in my case.

Luis

On Fri, Jan 25, 2019 at 11:20 AM Jason Lixfeld  wrote:
>
> I’m testing a similar approach (except using the ISIS overload bit) that aims 
> to prevent the path between a pair of LSRs via the links to and through my 
> RRs from being considered as a possible transit path.  Seems to work just 
> fine in the lab.
>
> > On Jan 24, 2019, at 3:24 PM, Luis Balbinot  wrote:
> >
> > That’s a good idea. I’m not 100% sure that this will prevent the creation
> > of bypass LSPs but I’ll give it a try.
> >
> > Thanks!
> >
> > Luis
> >
> > On Thu, 24 Jan 2019 at 18:01 Colby Barth  wrote:
> >
> >> Luis-
> >>
> >> You could probably set the overload bit.
> >>
> >> -Colby
> >>
> >> On 1/24/19, 1:10 PM, "juniper-nsp on behalf of Dave Bell" <
> >> juniper-nsp-boun...@puck.nether.net on behalf of m...@geordish.org> wrote:
> >>
> >>I'm not aware of any option that will do this.
> >>
> >>The three solutions that I can think of are:
> >>Link colouring like Adam suggests
> >>An explicit path that avoids the interfaces you are worried about
> >>Set the RSVP cost for the interfaces really high
> >>
> >>Dave
> >>
> >>On Thu, 24 Jan 2019 at 17:01, Luis Balbinot 
> >> wrote:
> >>
> >>> It's a permanent thing.
> >>>
> >>> These boxes are PE routers that are not supposed to handle transit
> >>> traffic but during certain network events a few FRR bypass LSPs are
> >>> established through them because that's the only remaining path.
> >>>
> >>> Something easier like a "no-eligible-backup" toggle like the one we
> >>> have with OSPF LFA would be nice.
> >>>
> >>> Luis
> >>>
> >>> On Thu, Jan 24, 2019 at 2:53 PM 
> >> wrote:
> >>>>
> >>>>> Luis Balbinot
> >>>>> Sent: Thursday, January 24, 2019 4:45 PM
> >>>>>
> >>>>> Hi.
> >>>>>
> >>>>> How could I prevent a device from getting transit RSVP LSPs being
> >>>>> established through it? I only want it to accept ingress LSPs
> >> destined
> >>> to
> >>>> that
> >>>>> box.
> >>>>>
> >>>> If this is a permanent thing,
> >>>> You could create a colouring scheme where all links connected to
> >> this
> >>> node
> >>>> have to be avoided by all LSPs with the exception of LSPs
> >> terminated on
> >>> this
> >>>> node (or originated by this node).
> >>>>
> >>>> If this is a maintenance thing,
> >>>> There's a command that can be enabled to drain all transit LSPs
> >> out the
> >>> box.
> >>>> But, all the LSPs would need to be configured with this capability
> >> in the
> >>>> first place.
> >>>>
> >>>>
> >>>> adam
> >>>>
> >>> ___
> >>> juniper-nsp mailing list juniper-nsp@puck.nether.net
> >>>
> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
> >>>
> >>___
> >>juniper-nsp mailing list juniper-nsp@puck.nether.net
> >>
> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
> >>
> >>
> >>
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Avoid transit LSPs

2019-01-25 Thread Luis Balbinot
It works. The only drawback is that all metrics on the local routing
table (not only for advertised LSAs) were increased by 65535. That's a
bit annoying but works just fine. The increase is relative to the
actual metric so we'll see values above 65535.

I'm not sure if I'll move to this approach or just keep using a very
large metric on those links.

Luis

On Fri, Jan 25, 2019 at 7:02 AM Mark Tinka  wrote:
>
>
>
> On 24/Jan/19 22:24, Luis Balbinot wrote:
>
> > That’s a good idea. I’m not 100% sure that this will prevent the creation
> > of bypass LSPs but I’ll give it a try.
>
> In theory, that should work, as the node will, effectively, be taken out
> of the IS-IS path (and all ensuing support services).
>
> Would be interested to hear how this works out.
>
> Mark.
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Avoid transit LSPs

2019-01-24 Thread Luis Balbinot
That’s a good idea. I’m not 100% sure that this will prevent the creation
of bypass LSPs but I’ll give it a try.

Thanks!

Luis

On Thu, 24 Jan 2019 at 18:01 Colby Barth  wrote:

> Luis-
>
> You could probably set the overload bit.
>
> -Colby
>
> On 1/24/19, 1:10 PM, "juniper-nsp on behalf of Dave Bell" <
> juniper-nsp-boun...@puck.nether.net on behalf of m...@geordish.org> wrote:
>
> I'm not aware of any option that will do this.
>
> The three solutions that I can think of are:
> Link colouring like Adam suggests
> An explicit path that avoids the interfaces you are worried about
> Set the RSVP cost for the interfaces really high
>
> Dave
>
> On Thu, 24 Jan 2019 at 17:01, Luis Balbinot 
> wrote:
>
> > It's a permanent thing.
> >
> > These boxes are PE routers that are not supposed to handle transit
> > traffic but during certain network events a few FRR bypass LSPs are
> > established through them because that's the only remaining path.
> >
> > Something easier like a "no-eligible-backup" toggle like the one we
> > have with OSPF LFA would be nice.
> >
> > Luis
> >
> > On Thu, Jan 24, 2019 at 2:53 PM 
> wrote:
> > >
> > > > Luis Balbinot
> > > > Sent: Thursday, January 24, 2019 4:45 PM
> > > >
> > > > Hi.
> > > >
> > > > How could I prevent a device from getting transit RSVP LSPs being
> > > > established through it? I only want it to accept ingress LSPs
> destined
> > to
> > > that
> > > > box.
> > > >
> > > If this is a permanent thing,
> > > You could create a colouring scheme where all links connected to
> this
> > node
> > > have to be avoided by all LSPs with the exception of LSPs
> terminated on
> > this
> > > node (or originated by this node).
> > >
> > > If this is a maintenance thing,
> > > There's a command that can be enabled to drain all transit LSPs
> out the
> > box.
> > > But, all the LSPs would need to be configured with this capability
> in the
> > > first place.
> > >
> > >
> > > adam
> > >
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> >
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> >
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
>
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Avoid transit LSPs

2019-01-24 Thread Luis Balbinot
It's a permanent thing.

These boxes are PE routers that are not supposed to handle transit
traffic but during certain network events a few FRR bypass LSPs are
established through them because that's the only remaining path.

Something easier like a "no-eligible-backup" toggle like the one we
have with OSPF LFA would be nice.

Luis

On Thu, Jan 24, 2019 at 2:53 PM  wrote:
>
> > Luis Balbinot
> > Sent: Thursday, January 24, 2019 4:45 PM
> >
> > Hi.
> >
> > How could I prevent a device from getting transit RSVP LSPs being
> > established through it? I only want it to accept ingress LSPs destined to
> that
> > box.
> >
> If this is a permanent thing,
> You could create a colouring scheme where all links connected to this node
> have to be avoided by all LSPs with the exception of LSPs terminated on this
> node (or originated by this node).
>
> If this is a maintenance thing,
> There's a command that can be enabled to drain all transit LSPs out the box.
> But, all the LSPs would need to be configured with this capability in the
> first place.
>
>
> adam
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Avoid transit LSPs

2019-01-24 Thread Luis Balbinot
Hi.

How could I prevent a device from getting transit RSVP LSPs being
established through it? I only want it to accept ingress LSPs destined
to that box.

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] inline-jflow monitoring

2019-01-02 Thread Luis Balbinot
From 16.1R1 and up you should also configure the IP flow table sizes,
as the default is 1024 entries for v4 if I'm not mistaken. Not sure if
this is your current issue, but it is something to consider as well. Also
check flex-flow-sizing as an option.
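
From memory the knobs look roughly like this (slot and size are just
examples - each size unit reserves a block of flow entries, so check the
docs for your line card before committing):

  set chassis fpc 0 inline-services flow-table-size ipv4-flow-table-size 8

or, alternatively:

  set chassis fpc 0 inline-services flex-flow-sizing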

Luis

On Wed, Jan 2, 2019 at 7:51 AM A. Camci  wrote:
>
> Hi all,
>
> Does anyone have experience with GENIEATM ( 6.3.2 ) and Juniper MX480 MPCE
> Type 2 3D ( 16.1R4-S3.6).
> recently we use the inline-jflow monitoring.
>
> It works, but we receive far too few samples:
> we expect around 10k samples per second but only see about 100.
>
>
> Border Router:
> Flow information
> FPC Slot: 0
> Flow Packets: 39566361752, Flow Bytes: 34679308997163
> Active Flows: 2478, Total Flows: 484673089
> Flows Exported: 384265866, Flow Packets Exported: 131910524
> Flows Inactive Timed Out: 103861379, Flows Active Timed Out: 380809232
> Total Flow Insert Count: 103863857
>
> IPv4 Flows:
> IPv4 Flow Packets: 39206606168, IPv4 Flow Bytes: 34296101187914
> IPv4 Active Flows: 2048, IPv4 Total Flows: 449829603
> IPv4 Flows Exported: 365283923, IPv4 Flow Packets exported: 117813878
> IPv4 Flows Inactive Timed Out: 87622231, IPv4 Flows Active Timed Out:
> 362205324
> IPv4 Flow Insert Count: 87624279
>
> IPv6 Flows:
> IPv6 Flow Packets: 359755584, IPv6 Flow Bytes: 383207809249
> IPv6 Active Flows: 430, IPv6 Total Flows: 34843486
> IPv6 Flows Exported: 18981943, IPv6 Flow Packets Exported: 14096646
> IPv6 Flows Inactive Timed Out: 16239148, IPv6 Flows Active Timed Out:
> 18603908
> IPv6 Flow Insert Count: 16239578
>
>
> GENIEATM
> Received Flows/sec: 93
>
>
> thanks
> ap
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Configuration database stuck with mgd crashing

2018-09-03 Thread Luis Balbinot
Mini heart attacks :-)

Now seriously, I’ve seen none so far.

On Mon, 3 Sep 2018 at 07:40 Sebastian Wiesinger 
wrote:

> * Phil Shafer  [2018-09-01 20:28]:
> > "commit full" helps when daemons miss config changes (which they
> > shouldn't) or if you just want to say "because I said so", but it
> > needs a functioning database, provided by MGD.  In this case, MGD
> > has corrupted the database (due to a software bug) and the assert
> > means that it's unable to do anything useful with the database since
> > it's corrupted and cannot be trusted.  "mgd -I" is the "nuke the
> > entire site from orbit" option.  It rebuilds the schema and the
> > database from scratch and reloads the entire contents.  It's the
> > only way to be sure.
>
> What operational impact does mgd -I have?
>
> Regards
>
> Sebastian
>
> --
> GPG Key: 0x93A0B9CE (F4F6 B1A3 866B 26E9 450A  9D82 58A2 D94A 93A0 B9CE)
> 'Are you Death?' ... IT'S THE SCYTHE, ISN'T IT? PEOPLE ALWAYS NOTICE THE
> SCYTHE.
> -- Terry Pratchett, The Fifth Elephant
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Configuration database stuck with mgd crashing

2018-08-31 Thread Luis Balbinot
As root you can run “mgd -I” to fix that. We had the same issue and it’s
been fixed in 16.1R7. We never opened a JTAC case for it because we knew
the answer would be a software upgrade since 16.1R7 was already out. The
trigger was NETCONF and it occurred randomly.
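
For the record, we ran it from the shell, roughly like this:

  > start shell user root
  % mgd -I

It rebuilds the schema and database from scratch, so expect it to take a
little while on a large configuration.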

Strangely this was caused by the service release version. Earlier 16.1R6
images were fine.

On Fri, 31 Aug 2018 at 08:49 Tore Anderson  wrote:

> One of my routers (a MX240 running 16.1R6-S2.3) have gotten stuck in a
> state where it believes the configuration database has been modified,
> and if I try to configure it anyway, mgd crashes and is respawned:
>
> tore@router> configure exclusive
> error: configuration database modified
>
> tore@router> configure private
> error: shared configuration database modified
>
> tore@router> configure
> Entering configuration mode
>
> Message from syslogd@router at Aug 31 13:38:57  ...
> router mgd[20554]: ../../../../../../src/ui/lib/access/model.c:238: insist
> 'model > 0 && model <= MODEL_MAX' failed
>
> error: session failure: unexpected termination
> error: remote side unexpectedly closed connection
> Connection to router closed.
>
> At this point PID 20554 goes away from the process list. However if I
> log back in I can see a «ghost» reference to it:
>
> router> configure exclusive
> Users currently editing the configuration:
>   tore terminal pts/0 (pid 20554) on since 2018-08-31 13:38:57 CEST, idle
> 00:01:25
> error: configuration database modified
>
> "request system logout user tore all" will get rid of that reference,
> but the fundamental defective state of the configuration database
> remains.
>
> Any suggestions on how to correct this problem without requiring
> any downtime? I have of course tried "restart management", but
> that didn't help. NETCONF is impacted too.
>
> Tore
>
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Carrier interfaces and hold timers

2018-08-15 Thread Luis Balbinot
Sometimes carriers protect optical circuits using inexpensive optical
switches that have longer switching delays (>50ms). In these cases I'd
understand their request for a longer hold-time. But 3 seconds is a lot.
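
If you do end up configuring it, note that it is per-direction and in
milliseconds, something like this (interface name and values are just
examples):

  set interfaces xe-0/0/0 hold-time up 2000 down 3000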

On Wed, 15 Aug 2018 at 20:02 Jonathan Call  wrote:

> Anyone have experience with hold timers?
>
>
> For the first time in my experience I have a carrier asking me to
> implement 3 second hold timers on their interface to deal with their link
> constantly flapping. They're citing this document as proof that it needs to
> be done:
>
>
> https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/hold-time-edit-interfaces.html
>
> I'm extremely dubious of this requirement since I've never had a carrier
> ask for this and our router is a pretty old MX80 which does not have a lot
> of buffer space.  But then again, maybe packet drops due to buffer overflow
> are better than carrier transitions and the resulting BGP teardown and
> rebuild.
>
> Regards,
>
> Jonathan
>
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] LACP hashing algorithm

2018-08-09 Thread Luis Balbinot
How many flows are there in total? Is there a test appliance involved? We
had many issues with those in the past during service delivery tests.

Also I assume you are using MPCs and not DPCs and also that you are talking
about IP traffic. Please correct me if not.

Luis

On Wed, 8 Aug 2018 at 20:32 junos fordummies via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi all,
>
> This will sound like a very weird question, but has anyone seen a scenario
> whereby an MX960 with 4 x 10G links always hashes (uses) a single link out
> of the 40G bundle ? We have restarted the device, traffic flows in one
> direction only use a single link, the reverse path is all 4 links in the
> bundle. Both ends of the AE is MX960 running 16.1r2
>
> Just wondering if anyone has seen anything similar ?
>
> Many thanks
>
> JfD.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] RES: QFX5100 vs ACX5048

2018-07-02 Thread Luis Balbinot
> I look into a preso I had and also this site… 
> https://packetpushers.net/juniper-enterprise-serious-campus-networking/
>
> …and I see mention of the chip for the ACX5448 possibly being Qumran-based.  
> Not sure if that helps y’all.

Yes, it is Qumran-based. 1M FIB, deep buffers, HQoS.

*Sounds* promising, but only if it comes out at a similar price tag to
the ACX5048. Otherwise there's no good reason not to use the MX204 as a
PE, like someone else mentioned.

There's also the ACX6030 on the horizon, running a "PTX" silicon with
a reduced feature set.

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] PTX as a PE

2018-05-04 Thread Luis Balbinot
Hey.

Is anyone using PTX1Ks or 10Ks to terminate L2VPN/L3VPN services? I
have a very specific situation on some sites where I have to terminate
a few of those for my own management services and I don't really want
to deploy another PE just for that.

Are there any limitations besides some CoS stuff I'm already aware of?
My idea is to terminate all services on a tagged trunk port.

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] mx960 to mx960 via ciena 6500 - mtu smaller in the middle

2018-04-17 Thread Luis Balbinot
> This issue is me turning up new MX960's that are simply connected together
> with Ciena 6500 DWDM... for me to have an MTU issue via DWDM is actually a
> surprise to me.  I pretty much always envisioned wave/lambda dwdm as darn near
> like having an actual fiber cable... no, not the case apparently... I'm
> surprised that the ciena dwdm service in this case is actually imposing an
> Ethernet mtu limitation!  wow, the more I think on it, the more I'm
> surprised about it... previously my older asr9k 15-node mpls ring was via
> fujitsu flashwave dwdm and I don't recall this occurring

DWDM is kind of agnostic in the core, but for packet networks there are
OTU frames (max size = 16,320 bytes) and there is also the interface
line card that imposes the actual MTU for the Ethernet interface (9600
is a well-known value for DWDM networks). If you have, say, a 100G
transponder and a 10x10G line card, then you have to frame all those
ports; they won't hit the line as ten independent 10G wavelengths. Even
10G channels on a regular 10G non-coherent network must be framed for
FEC and everything else.

If you have a colored interface directly from your router to the DWDM
system then you might skip that limitation (that depends on the
platform and linecard configuration).
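
A quick way to validate the usable MTU across such a circuit is a
do-not-fragment ping sized just below the expected limit (address and
size are examples; on Junos the size argument is the ICMP payload, so
leave headroom for the IP/ICMP headers):

  ping 192.0.2.1 size 9500 do-not-fragment count 5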

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Transit composite next hops

2018-02-26 Thread Luis Balbinot
> My understanding is that "ingress" and "transit" in relation to CCNHs is
> just a very misleading nomenclature.
> If you want to go by definition CCNHs are pointers between VPN and NH label
> -and transit boxes have no knowledge of VPN labels so go figure...
>
> But there are still several levels of indirection in the next hop chain that
> transit routers can leverage in their FIBs.
>
> So what is the use case you're interested in?

None. I was only curious about transit CCNHs and how they manage to
derive information from transit traffic in order to create the CCNHs.
I questioned two SEs and neither came back with a straight answer, so I
think even internally at Juniper this is some sort of black magic
restricted to a few Illuminati.

I do use ingress CCNHs, but mostly to make troubleshooting easier. I'm
far away from having tens of thousands of VPN entries that would
greatly benefit from that.
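
For reference, the ingress knob I mean is along these lines:

  set routing-options forwarding-table chained-composite-next-hop ingress l3vpn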

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Transit composite next hops

2018-02-13 Thread Luis Balbinot
What is even more misleading is that the MX accepts the transit
configuration and commits without warnings. I issued the commit on a
standalone router, but tomorrow I'm going to set up a lab with 3 routers.

Some docs mention that MPC-only chassis like the MX80 come with CNHs
configured as the default, but that's only true for ingress EVPN.

I'm still confused :-)

Luis

On Sun, 11 Feb 2018 at 19:56 Olivier Benghozi 
wrote:

> Hi Luis,
>
> I already wondered the same thing, and asked to our Juniper representative
> ; the answer was that each family supports (and only supports) its specific
> CCNH flavour:
> CCNH for ingress: MX
> CCNH for transit: PTX (I didn't asked for QFX10k).
> Olivier
>
> > On 10 feb. 2018 at 19:17, Luis Balbinot  wrote :
> >
> > I was reading about composite chained next hops and it was not clear to
> me
> > whether or not MX routers support them for transit traffic. According to
> > the doc bellow it's only a QFX10k/PTX thing:
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Transit composite next hops

2018-02-10 Thread Luis Balbinot
Hi.

I was reading about composite chained next hops and it was not clear to me
whether or not MX routers support them for transit traffic. According to
the doc below it's only a QFX10k/PTX thing:

https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/transit-edit-routing-options-chained.html

But there are some contradictions like this document:

https://www.juniper.net/documentation/en_US/junos/topics/concept/mpls-vpns-chained-composite-next-hops.html

Which is correct? Juniper docs on this subject, especially transit CNHs, are
very superficial. The MX Series book does not mention it and the MPLS in
the SDN Era book only explains how the ingress CNHs work.

Thanks.

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] MPLS statistics SNMP issues

2018-01-31 Thread Luis Balbinot
Hi.

Anyone else having issues with stalled Aggr counters from the
MPLS-MIB?  My LSPs are resignalled a few times a day because of
auto-bandwidth/reoptimization, and after an event where the path gets
rerouted elsewhere the SNMP process sometimes stops updating those
objects until the path changes again. I'm collecting both
mplsLspInfoAggrOctets and mplsLspInfoAggrPackets. My MPLS statistics
update interval is 60 seconds and my LSPs are single-path CSPF + FRR.

Sometimes if I keep refreshing the output of "show mpls lsp statistics
detail" I will see the aggregate counters going backwards between
consecutive readings.

I couldn't find anything in the PR database. I'm on 16.1R2 on that MX
box. I haven't opened a JTAC case yet.

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Experience with MX10003

2018-01-25 Thread Luis Balbinot
It's not the same chip, as Alexander pointed out. And it's not even
brand new, it's been around for 2 years now.

We are deploying our first 3 units next month and the only "bad" thing
is that you have to use Junos 17.3, so be prepared for an adventure.

But MX10003 is not better than MX960, it's different. If you need a
lot of 10G ports the MX960 is the best option, especially if you have
a mix of SFP+ interfaces (ER, CWDM, BiDi, etc).

On Thu, Jan 25, 2018 at 3:15 PM, Alexander Marhold
 wrote:
> Hi
> Regarding same chipset as mx960:
>
> RE: yes, x86
> PFE: a clear NO - it uses a "BRAND NEW" 3rd generation TRIO chipset with
> 400G throughput, also built into the MX204
>
> Grabbed info from a BDM document in PPT describing both new platforms
> Regards
>
> alexander
>
> -Ursprüngliche Nachricht-
> Von: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] Im Auftrag von 
> Alain Hebert
> Gesendet: Donnerstag, 25. Januar 2018 17:47
> An: Juniper List
> Betreff: [j-nsp] Experience with MX10003
>
>  Hi,
>
>  After the bad experience with the QFX5100, now our rep is pushing for 
> MX10003 instead of MX960.
>
>  While it's half the routing (10T versus 4T), at 1/2 the price, and
> barely 3U in space, with the same chipset (according to the sales guy).
>
>  Does anything ring true?  Or are we going to be just another bunch of crash
> test dummies for Juniper to test this new platform?
>
>  Thanks for your time.
>
> --
> -
> Alain Hebert  aheb...@pubnix.net
> PubNIX Inc.
> 50 boul. St-Charles
> P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
> Tel: 514-990-5911  http://www.pubnix.net  Fax: 514-990-9443
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net 
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Syslog getting spammed by DDOS_PROTOCOL_VIOLATION_SET

2017-11-21 Thread Luis Balbinot
Sorry, I meant the opposite (i.e. the defaults are too high).

One that is especially high is IGMP at 20k. Multicast loops on
large layer-2 fabrics (IXPs) will bring down first-gen Trios very
easily (can't say the same for the newer ones up to Eagle).
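
We clamp it down with something along these lines (numbers are just
examples - pick values that make sense for your control plane):

  set system ddos-protection protocols igmp aggregate bandwidth 2000
  set system ddos-protection protocols igmp aggregate burst 2000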

On Tue, Nov 21, 2017 at 10:19 AM, Saku Ytti  wrote:
> On 21 November 2017 at 14:12, Luis Balbinot  wrote:
>
>> The DDoS protection factory defaults are very low in some cases. The
>> Juniper MX Series book has a nice chapter on that.
>
> Do you have an example? Most of them are like 20kpps, which ismore
> than you need to congest the built-in NPU=>PFE_CPU policer. I.e. they
> are massively too large out-of-the-box.
>
> I doubt anyone has configured them to sensible values, as it would be
> hundreds of lines of ddos-protection config, as you cannot set default
> values which apply to all of them and then more-specific ones to the
> ones you care. Correct configuration needs to manually configure each
> and every one, those which you don't need, as low as you want, like
> 10pps.
>
>
> --
>   ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Syslog getting spammed by DDOS_PROTOCOL_VIOLATION_SET

2017-11-21 Thread Luis Balbinot
Most likely spoofed traffic or you don't have full tables or a default
route. A /18 will pull a lot of unwanted traffic.

The DDoS protection factory defaults are very low in some cases. The
Juniper MX Series book has a nice chapter on that.
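
If the violations keep tripping, you can inspect what is hitting the
policer and adjust it to taste, roughly like this (the value is just an
example):

  show ddos-protection protocols resolve statistics
  set system ddos-protection protocols resolve ucast-v4 bandwidth 5000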

On Tue, 21 Nov 2017 at 09:02 Karl Gerhard  wrote:

> Hello
>
> our syslog is getting spammed with the following messages:
> jddosd[12168]: %DAEMON-4-DDOS_PROTOCOL_VIOLATION_SET: Protocol
> resolve:ucast-v4 is violated at fpc 11 for 1389 times
> jddosd[12168]: %DAEMON-4-DDOS_PROTOCOL_VIOLATION_CLEAR: Protocol
> resolve:ucast-v4 has returned to normal. Violated at fpc 11 for 1389 times
>
> What is puzzling is that there is barely any traffic going through that
> machine (like 5 MBit/s). It seems like those messages are being triggered
> by random noise from the internet just by announcing a single /18.
>
> Is that normal? Is there a way to gracefully handle those messages (i.e.
> save them into another file) without losing important information?
>
> Regards
> Karl
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Best practice for igp/bgp metrics

2017-10-25 Thread Luis Balbinot
Well, for the 99% of us that only do basic stuff with a TE tunnel every now
and then that works fine. For those that have extremely demanding customers
and critical services you will need  some sort of external controller to
manage all that anyway and then we are basically replaced by scripts ;-)

Luis

On Wed, 25 Oct 2017 at 18:07 Saku Ytti  wrote:

> Hey,
>
> This only matters if you are letting system assign metric
> automatically based on bandwidth. Whole notion of preferring
> interfaces with most bandwidth is fundamentally broken. If you are
> using this design, you might as well assign same number to every
> interface and use strict hop count.
>
> On 25 October 2017 at 22:41, Luis Balbinot  wrote:
> > Never underestimate your reference-bandwidth!
> >
> > We recently set all our routers to 1000g (1 Tbps) and it was not a
> > trivial task. And now I feel like I'm going to regret that in a couple
> > years. Even if you work with smaller circuits, having larger numbers
> > will give you more range to play around.
> >
> > Luis
> >
> > On Tue, Oct 24, 2017 at 8:50 AM, Alexander Dube 
> wrote:
> >> Hello,
> >>
> >> we're redesigning our backbone with multiple datacenters and pops
> currently and looking for a best practice or a recommendation for
> configuring the metrics.
> >> What we have for now is a full meshed backbone with underlaying isis.
> IBGP exports routes without any metric. LSP are in loose mode and are using
> isis metric for path calculation.
> >>
> >> Do you have a recommendation for metrics/te ( isis and bgp ) to have
> some values like path lengh ( kilometers ), bandwidth, maybe latency, etc
> inside of path calculation?
> >>
> >> Kind regards
> >> Alex
> >> ___
> >> juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
>
> --
>   ++ytti
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Best practice for igp/bgp metrics

2017-10-25 Thread Luis Balbinot
Never underestimate your reference-bandwidth!

We recently set all our routers to 1000g (1 Tbps) and it was not a
trivial task. And now I feel like I'm going to regret that in a couple
years. Even if you work with smaller circuits, having larger numbers
will give you more range to play around.
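
For reference, the knob in question (IS-IS shown; OSPF has the same
statement):

  set protocols isis reference-bandwidth 1000g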

Luis

On Tue, Oct 24, 2017 at 8:50 AM, Alexander Dube  wrote:
> Hello,
>
> we're redesigning our backbone with multiple datacenters and pops currently 
> and looking for a best practice or a recommendation for configuring the 
> metrics.
> What we have for now is a full meshed backbone with underlaying isis. IBGP 
> exports routes without any metric. LSP are in loose mode and are using isis 
> metric for path calculation.
>
> Do you have a recommendation for metrics/te ( isis and bgp ) to have some 
> values like path lengh ( kilometers ), bandwidth, maybe latency, etc inside 
> of path calculation?
>
> Kind regards
> Alex
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Routing Engine upgrade

2017-10-20 Thread Luis Balbinot
If possible, aim for a full system restart; it will be less painful and
very straightforward (you'll need at least a one-hour window). Your PFEs
will go through a warm reboot anyway if you upgrade the software on the
new REs. But please confirm that with your SE.
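
Roughly the dual-RE sequence, from memory (double-check against the
official procedure for your release):

  request routing-engine login re1   <- then install the target Junos there
  request chassis routing-engine master switch

and make sure GRES/NSR and commit synchronize are already enabled:

  set chassis redundancy graceful-switchover
  set routing-options nonstop-routing
  set system commit synchronize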

On Wed, Oct 18, 2017 at 2:47 PM, craig washington
 wrote:
> Hello all,
>
>
> Kind of a general question but just wanted to double check any gotchas.
>
>
> We are upgrading some of our routers from RE-S-2000 to RE-S-1800x4.
>
> This should just be putting the new on in the backup, letting it sync and 
> upgrade software as needed and then doing switch over, then performing same 
> step on the primary.
>
> Is there anything that I am missing, are there any odd issues with the 
> RE-S-2000 that could cause a crash of some sort?
>
>
> Thanks
>
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-20 Thread Luis Balbinot
Even on newer Junos if you don't enable the indirect-next-hop toggle
you'll still see krt entries with 0x2 flags.
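
The toggle in question is this one; with it committed the krt entries
should show the 0x3 flag, as Dragan describes below:

  set routing-options forwarding-table indirect-next-hop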

On Tue, Apr 18, 2017 at 6:30 PM, Dragan Jovicic  wrote:
> As mentioned on mx trio indirect-nh is enabled and can't be disabled.
> You could check with > show krt indirect-next-hop protocol-next-hop
> commands (0x3 flag should mean it is enabled).
> However this was not the case in older Junos versions where
> indirect-next-hop was in fact not enabled and had to be enabled even on mx
> mpc (it escapes me when was this, pre-13 or so).
>
> If your uplink fails, with indirect-nh change is almost instantaneous,
> given your BGP next-hop is unchanged, as only one pointer needs to be
> rewritten (or you have equal cost uplinks...). However you still need
> composite-next-hop feature for L3VPN labeled traffic and this is NOT
> enabled by default (might be important if you run lots of routes in vrf)...
>
> If your BGP next-hop changes and you have routes in rib (add-paths,
> advertise-external, multiple RRs), and you have them installed in FT
> (pre- or post- 15.1), you still rely on failure detection of upstream BGP
> router or upstream link (even slower, but you could put upstream links in
> IGP).
>
> There's also egress-protection for labeled traffic..
>
> Before we implemented bgp pic/add-paths, we used multiple RR and iBGP mesh
> in certain parts and spread BGP partial feeds from multiple upstream
> routers to at least minimize time to update FIB, as none of this required
> any upgrade/maintenance.
>
> If you find your FIB update time is terrible, bgp pic edge will definately
> help..
>
> BR,
>
>
> -Dragan
>
> ccie/jncie
>
>
>
>
>
> On Tue, Apr 18, 2017 at 10:07 PM, Vincent Bernat  wrote:
>
>>  ❦ 18 avril 2017 21:51 +0200, Raphael Mazelier  :
>>
>> >> Is this the case for chassis MX104 and 80? Is your recommendation to run
>> >> with indirect-next-hop on them as well?
>> >>
>> >
>> > Correct me if I'm wrong but I think this is the default on all the MX
>> > since a long time. There as no downside afaik.
>>
>> Documentation says:
>>
>> > By default, the Junos Trio Modular Port Concentrator (MPC) chipset on
>> > MX Series routers is enabled with indirectly connected next hops, and
>> > this cannot be disabled using the no-indirect-next-hop statement.
>> --
>> Harp not on that string.
>> -- William Shakespeare, "Henry VI"
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Juniper PTX1000

2016-12-17 Thread Luis Balbinot
> In fact looking at JNPR market cap, I'm worried of long term
> survivability of JNPR right now.


I agree with you Saku. All they talk about is SDN and software solutions.

We have been trying to get a quote on PTX1Ks for a long time and they keep
pushing back and asking for more details on our network, as if they don't
want to sell it. Does anyone know the current market value for it?

Would you consider any ACX/QFX as a replacement P router where there's no
need for 100G ports?

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] RE-S-X6-64G & ISSU?

2016-11-22 Thread Luis Balbinot
Depending on your arrangement with Juniper the price for a backup RE
is negligible compared to the rest of the chassis (we got them for
free several times). There's really no reason to leave a blank RE slot
considering you have redundant SCBs.

Luis

On Tue, Nov 22, 2016 at 2:19 PM, Michael Hare  wrote:
> Agree with Mark, if you count loss of redundancy as a high priority issue 
> find the funds to purchase dual RE even on dual chassis designs.
>
> We made this engineering mistake; initially saved money with a single RE 
> design with dual PE MX104s.  We've had some NAND corruption RE failures that 
> are undetectable until the fan gets hit.  The MX104 recovery process requires 
> physical access (would love to be proven wrong).  Some of these chassis are 
> distant and/or not convenient to access physically.  We are walking back and 
> installing dual RE everywhere.
>
> -Michael
>
>> -Original Message-
>> From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of
>> Mark Tinka
>> Sent: Tuesday, November 22, 2016 12:00 AM
>> To: Aaron ; 'Sebastian Becker' 
>> Cc: juniper-nsp@puck.nether.net; 'Clarke Morledge' 
>> Subject: Re: [j-nsp] RE-S-X6-64G & ISSU?
>>
>>
>>
>> On 14/Nov/16 17:04, Aaron wrote:
>>
>> > Have y'all ever thought through the benefits of having dual RE in one
>> > chassis compared to 2 chassis with 1 RE in each ?
>> >
>> > Like if I'm thinking about getting a MX480 with dual RE... what about
>> > instead, getting dual MX240 with 1 RE in each ?  ...and possibly Virtual
>> > Chassis'ing the dual MX240's as one virtual router
>> >
>> > Chassis diversity seems nice...then whatever would connect in that 
>> > location,
>> > can be redundantly connected to both MX240's.
>>
>> Firstly, unless you're tight for space, I'd not spend money on an MX240.
>> MX480 should be the bare minimum. Line cards are hungry for space.
>>
>> We used to run single control planes in core switches, and have 2 core
>> switches per PoP. This was because those switches only handled Ethernet
>> traffic, no IP/MPLS.
>>
>> I'd be hesitant to run any kind of routing device on a single control
>> plane unless it was designed as such. Then again, control planes are not
>> that much more expensive in current-generation core switches, so we are
>> now kitting them fully out from that perspective as well.
>>
>> Mark.
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] SNMP NMS support of Junos VLAN MIBs

2016-11-22 Thread Luis Balbinot
Check out Observium. I don't know about this specific MIB, but it correctly
detects VLAN memberships for me.

On Dec 9, 2015 14:32, "Chuck Anderson"  wrote:

> Has anyone tried to use or implement polling of the Q-BRIDGE-MIB on
> any Juniper products, using either commercial or open source NMS
> software or custom in-house software?  What has been your experience
> of the Juniper support of those SNMP products to correctly report
> Port/VLAN memberships and VLAN/MAC FDB information?
>
> Juniper EX-series (at least EX2200,3200,4200) 12.x and earlier has a
> working Q-BRIDGE-MIB (dot1qVlanStaticEgressPorts) and JUNIPER-VLAN-MIB
> (jnxExVlan).  Because Q-BRIDGE-MIB refers only to internal VLAN
> indexes, you need to use both MIBs to get Port/VLAN mappings including
> the 802.1Q VLAN tag ID (jnxExVlanTag).  This means custom software, or
> an NMS vendor willing to implement the Juniper Enterprise MIBs.
>
> All other Juniper Junos platforms only have Q-BRIDGE-MIB, but it is
> broken (doesn't follow RFC 4363 standard PortList definition, instead
> storing port indexes as ASCII-encoded, comma separated values),
> apparently for a very long time.  So again, you need custom software
> or an NMS vendor willing to implement the broken Juniper version of
> Q-BRIDGE-MIB (along with detecting which implementation is needed on
> any particular device).  This hasn't been a problem for us and in fact
> went unnoticed, because we never cared to poll VLAN information from
> our MX routers, only EX switches.
>
> But now EX-series (and QFX-series) 13.x and newer with ELS have
> dropped the Enterprise JUNIPER-VLAN-MIB (a good thing to not require
> Enterprise MIBs to get the VLAN tag ID) and have adopted the broken
> Q-BRIDGE-MIB that all the other Junos platforms have been using (a
> very bad thing).  I'm pushing to have Juniper fix this, but their
> concern is that it may break SNMP software that has been assuming the
> broken Q-BRIDGE-MIB implementation for all these years.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] RES: MX10 - BGP and LDP sessions flapping without a reason

2016-11-08 Thread Luis Balbinot
An IGMP flood can easily bring the latest RE to its knees. The default
DDoS protection setting for IGMP is way too high (20kpps) on MX boxes and
you should tweak it.

On Nov 8, 2016 20:28, "Alexandre Guimaraes" 
wrote:

> Niall,
> Thank you for your help. I will carefully review your
> considerations and apply them in a maintenance window, and keep looking...
>
> But today I think I found the problem, which is related to a large
> broadcast domain network of one customer. I saw some tfeb information
> about a large udp/igmp flood coming in on one interface; I had seen it
> before, but didn't imagine that behavior was bad for the MX. After moving
> the customer to an MX480, those flaps stopped.
>
> Again, thank you.
>
>
> Alexandre
>
> -Mensagem original-
> De: Niall Donaghy [mailto:niall.dona...@geant.org]
> Enviada em: terça-feira, 8 de novembro de 2016 19:41
> Para: Alexandre Guimaraes ;
> juniper-nsp@puck.nether.net
> Assunto: RE: MX10 - BGP and LDP sessions flapping without a reason
>
> Have you run 'show krt queue' and 'show krt state' during times of outage?
>
> The control plane hardware on MX5 (or whatever you license it as) is puny
> - a Freescale e500v2 CPU @ 1.33GHz, 2.4DMIPS/MHz, therefore
> 3192 DMIPS (single core).
> We deployed a couple of MX80s in our network and had problems with route
> convergence, BGP stability, etc. in a full-mesh iBGP
> scenario.
> As L3 devices we found them unusable. As L2 devices they were fine.
> In particular, MX5/MX80 with MS-MIC is a bad combination - this would
> eventually peg the CPU and drop sessions, or crash the TFEB and
> core dump.
>
> Other issues we encountered were a cosd memory leak that we never got to
> the bottom of.
>
> Junos also runs some redundant processes by default which are not even
> applicable on MX5/80.
> You can disable these and gain some extra RAM:
>
> [edit system]
>  +   processes {
>  +   cfm disable;
>  +   send disable;
>  +   ethernet-connectivity-fault-management disable;
>  +   ddos-protection disable;
>  +   ppp disable;
>  +   sonet-aps disable;
>  +   link-management disable;
>  +   iccp-service disable;
>  +   }
>
> See also https://prsearch.juniper.net/InfoCenter/index?page=prcontent&id=PR1099523
> - rpd.record file size/rotation issue if Junos < 14.1R6.
>
> In our particular use case we removed the L3 terminations from the box and
> instead used l2circuits to haul them to nearby MX960
> routers which could handle the control plane load.
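>
> For illustration only, a minimal l2circuit hand-off is roughly the
> following (neighbor, interface and VC-ID made up), with the
> mirror-image config on the MX960 end:
>
> set protocols l2circuit neighbor 10.0.0.1 interface ge-1/0/0.100 virtual-circuit-id 100
> set interfaces ge-1/0/0 vlan-tagging
> set interfaces ge-1/0/0 unit 100 encapsulation vlan-ccc vlan-id 100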
> As L2 termination devices they were fine, but I would be very reluctant to
> touch MX5/80 if I could avoid it!
>
> Kind regards,
> Niall
>
> > -Original Message-
> > From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On
> Behalf Of Alexandre Guimaraes
> > Sent: 08 November 2016 13:32
> > To: juniper-nsp@puck.nether.net
> > Subject: [j-nsp] MX10 - BGP and LDP sessions flapping without a reason
> >
> > Hi all,
> >
> > Has anyone experienced something like this on an MX10 without an obvious
> > reason? There have been no changes in network topology, all the interfaces
> > are up, and no configuration changes have been made. There isn't anything
> > useful in the "show log messages" output. If I check the updates sent by
> > BGP peers, there is no excessive flooding from any of them, yet BGP
> > sessions flap randomly.
> >
> > Has anyone seen such behavior before, where RPD has high CPU utilization
> > without a clear reason? Is it somehow possible to trace the updates going
> > to RPD in order to better understand what exactly RPD is doing at the
> > times when the CPU utilization is high?
> >
> > JTAC is already working on it to try to find the issue, but so far I
> > think they have no clue what's going on.
> >
> >
> >
> > Model: mx10-t
> > Junos: 14.1R7.4
> > Hardware inventory:
> > Item Version  Part number  Serial number FRU model number
> > Midplane REV 08   711-038213     CHAS-MX10-T-S
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Load balancing errors on 15.1R4

2016-10-21 Thread Luis Balbinot
Case is open, but nothing yet. Could be related to PR1164101, but that's
not Juniper's official position.

Next week we are upgrading a lab MX480 to 16.2 to see if the issue persists.

Luis

On Oct 21, 2016 14:54, "Dragan Jovicic"  wrote:

> Hi,
>
> Do you have any update on this? Have you opened a case for this maybe?
>
> Best
> Dragan
>
> On Tue, Oct 18, 2016 at 4:14 PM, Luis Balbinot 
> wrote:
>
>> Hey.
>>
>> Is anyone else having issues with load-balancing on 15.1R4? I'm
>> getting these FPC errors in multiple boxes:
>>
>> fpc0 LUCHIP(3) RMC 2  Uninitialized EDMEM[0x3ce333] Read
>> (0x6db6db6d6db6db6d)
>> fpc0 LUCHIP(3) PPE_2 Errors sync xtxn error
>> fpc0 LUCHIP(3) PPE_15 Errors sync xtxn error
>> fpc0 PPE Sync XTXN Err Trap:  Count 23064982, PC 376, 0x0376:
>> call_table_launch_nh
>>
>> They are all MX960s with MPC 16x10G cards. This impacts traffic to
>> random destinations (route is OK in FIB and RIB but traffic is
>> blackholed). If I disable per-packet lb everything works perfectly.
>>
>> This issue appears when we have flaps that force route
>> recalculation/installation.
>>
>> I have a similar box running 12.3R3.4 that is just fine.
>>
>> Luis
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Load balancing errors on 15.1R4

2016-10-18 Thread Luis Balbinot
Hey.

Is anyone else having issues with load-balancing on 15.1R4? I'm
getting these FPC errors in multiple boxes:

fpc0 LUCHIP(3) RMC 2  Uninitialized EDMEM[0x3ce333] Read (0x6db6db6d6db6db6d)
fpc0 LUCHIP(3) PPE_2 Errors sync xtxn error
fpc0 LUCHIP(3) PPE_15 Errors sync xtxn error
fpc0 PPE Sync XTXN Err Trap:  Count 23064982, PC 376, 0x0376:
call_table_launch_nh

They are all MX960s with MPC 16x10G cards. This impacts traffic to
random destinations (route is OK in FIB and RIB but traffic is
blackholed). If I disable per-packet lb everything works perfectly.
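
For context, "per-packet lb" here is the usual per-flow hashing enabled
through a forwarding-table export policy, roughly (policy name
illustrative):

set policy-options policy-statement PFE-LB then load-balance per-packet
set routing-options forwarding-table export PFE-LB

Removing that export is what restores forwarding to the affected
destinations.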

This issue appears when we have flaps that force route
recalculation/installation.

I have a similar box running 12.3R3.4 that is just fine.

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Communities on l2vpn instances

2016-09-27 Thread Luis Balbinot
Thanks Krasi, that's how we do it, but this relies on specific target
extended communities to sort out which LSPs are installed for each
l2vpn instance. We want to use another common community to group
instances and avoid having giant match clauses.
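
For reference, what we have today is roughly the following (community,
policy and LSP names are illustrative):

set policy-options community GOLD-TARGETS members [ target:65000:101 target:65000:102 ]
set policy-options policy-statement L2VPN-TO-LSP term gold from community GOLD-TARGETS
set policy-options policy-statement L2VPN-TO-LSP term gold then install-nexthop lsp-regex "to-.*-gold"
set policy-options policy-statement L2VPN-TO-LSP term gold then accept
set routing-options forwarding-table export L2VPN-TO-LSP

It works, but the members list grows with every new instance, which is
exactly what a single grouping community would avoid.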

Luis

On Tue, Sep 27, 2016 at 9:16 AM, Krasimir Avramski  wrote:
> Hi,
>
> You can follow the vpls example (l2vpn/l3vpn/l2circuit traffic mapping is
> the same):
> http://www.juniper.net/documentation/en_US/junos15.1/topics/usage-guidelines/vpns-mapping-vpls-traffic-to-specific-lsps.html
>
> install-nexthop  lsp lsp-name: Use the "strict" option to enable
> strict mode, which checks to see if any of the LSP next hops specified in
> the policy are up. If none of the specified LSP next hops are up, the policy
> installs the discard next hop.
>
> Best Regards,
> Krasi
>
>
> On 26 September 2016 at 16:43, Luis Balbinot  wrote:
>>
>> Hi.
>>
>> It's possible to set communities at the "protocol l2vpn" level in a
>> l2vpn routing-instance at three different places:
>>
>> set interface xxx community yyy
>> set site xxx community yyy
>> set site xxx interface yyy community zzz
>>
>> But these don't seem to change anything. Documentation on these
>> commands is pretty much nonexistent. Does anyone know how/if they work?
>>
>> My idea is to set communities to aggregate l2vpn instances and control
>> which groups take different LSPs. All communities set from a VRF
>> export policy are only sent on the BGP sessions and are not installed
>> at the local l2vpn tables. Is there a way to set multiple communities
>> locally on a l2vpn routing-instance?
>>
>> Luis
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Communities on l2vpn instances

2016-09-26 Thread Luis Balbinot
Hi.

It's possible to set communities at the "protocol l2vpn" level in a
l2vpn routing-instance at three different places:

set interface xxx community yyy
set site xxx community yyy
set site xxx interface yyy community zzz

But these don't seem to change anything. Documentation on these
commands is pretty much nonexistent. Does anyone know how/if they work?

My idea is to set communities to aggregate l2vpn instances and control
which groups take different LSPs. All communities set from a VRF
export policy are only sent on the BGP sessions and are not installed
at the local l2vpn tables. Is there a way to set multiple communities
locally on a l2vpn routing-instance?

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Best Place to Buy Used Juniper

2016-04-01 Thread Luis Balbinot
I got a quote from them a while ago; it's not worth it. The MPC we
priced is available to us new from Juniper for $35k and used from Hula
for $10k, yet they asked $50k. Their prices float according to the
relationship you have with Juniper.

On Mon, Mar 28, 2016 at 1:49 PM, Colton Conor  wrote:
> Graham,
>
> I have never seen this  http://junipercpo.net/ website until now. Are they
> really any different than the rest of the used Juniper guys? How does their
> pricing compare to what you see on eBay for example?
>
> On Sat, Mar 26, 2016 at 5:44 PM, Graham Brown 
> wrote:
>
>> Hi Colton,
>>
>> The official used, Juniper-certified gear is available here:
>> http://junipercpo.net/
>>
>> HTH,
>> Graham
>>
>> Graham Brown
>> Twitter - @mountainrescuer 
>> LinkedIn 
>>
>> On 27 March 2016 at 06:10, Colton Conor  wrote:
>>
>>> Where is the best place to buy used Juniper gear?
>>> ___
>>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>>
>>
>>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Core network design for an ISP

2016-03-24 Thread Luis Balbinot
A good practice on MX480s would be to keep upstream and downstream ports on
separate MPCs if possible. Depending on your config, the standard 256MB of
RLDRAM on some cards might become an issue in the not-so-near future. I'm
not sure how much RLDRAM those NG cards have, though.

I don't see any advantages of running full tables on a virtual-router,
specially if you have a 64GB RE.

For iBGP consider multiple loopback addresses for different families. I'd
do v4 and v6 (6PE with MPLS) on one loopback and inet-vpn, l2vpn, etc. on
another. Even with newer REs, a full table takes quite some time to come up.

For IGP keep a flat area, no need to segment.

If starting from scratch, look at BGP-LU. Running an MX core is expensive
in terms of cost per port. You could run a much cheaper MPLS-only core in
the future with 100Gbps interfaces at only a fraction of the cost of what a
bunch of MPC4 cards would cost.

For IXs I'd recommend a separate routing-instance. This will help you avoid
stuff like someone defaulting to you and transit deviations.
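
The IX instance can be as simple as (names and addresses illustrative):

set routing-instances IX instance-type virtual-router
set routing-instances IX interface xe-0/0/0.0
set routing-instances IX protocols bgp group IX-PEERS type external
set routing-instances IX protocols bgp group IX-PEERS peer-as 64511
set routing-instances IX protocols bgp group IX-PEERS neighbor 198.51.100.1

Then leak only the routes you actually want between the instance and
inet.0 (rib-groups or instance-import), so a peer pointing a default
route at you gets nowhere.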

Luis
On Mar 24, 2016 20:57, "Matthew Crocker"  wrote:

>
>
> Hello,
>
> What is the current best practice for carrying full tables in MX series
> routers?   I have 3 new MX480s coming soon and will use them to rebuild my
> core network (currently a mix of MX240 & MX80 routers).  MPC-NG (w/ 20x1g &
> 10x10g MICs) & RE-S-X6-64G-BB.
>
> I’m running MPLS now and have full tables in the default routing instance.
>  Does it make more sense (i.e. more secure core) to run full tables in a
> separate virtual-router?  I’ve been doing this small ISP thing for 20+
> years, Cisco before, Juniper now, I’ve always bashed my way through.
>
> Looking for a book, NANOG presentation or guide on what is current best
> practice with state of the art gear.
>
> MPLS?  BGP? IS-IS? LDP? etc.
>
> The network is a triangle  (A -> B -> C -> A),  MX480 at each POP,  10g
> connections between POPs,  10g connections to IX & upstreams.  Most
> customers are fed redundantly from A & B
>
> Thanks
>
> -Matt
>
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] setting named communities on static routes

2016-01-28 Thread Luis Balbinot
+1M

Or a policy rule, if that makes sense, like IOS, which allows you to apply
a route-map to every network statement under the BGP configuration.
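
A rough Junos equivalent in the meantime is to attach the named community
at export time rather than on the static route itself (group name
illustrative, reusing Chuck's example prefix and community):

set policy-options policy-statement TAG-STATICS term t1 from protocol static
set policy-options policy-statement TAG-STATICS term t1 from route-filter 192.0.2.0/24 exact
set policy-options policy-statement TAG-STATICS term t1 then community add TEST
set policy-options policy-statement TAG-STATICS term t1 then accept
set protocols bgp group TRANSIT export TAG-STATICS
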
On Jan 28, 2016 18:56, "Chuck Anderson"  wrote:

> On Thu, Jan 28, 2016 at 02:30:52PM -0500, Jeff Haas wrote:
> >
> > > On Jan 28, 2016, at 2:16 PM, Chuck Anderson  wrote:
> > >
> > > Does anyone know why Junos doesn't accept named communities for static
> > > routes?  This doesn't work:
> > >
> > > set routing-options static route 192.0.2.0/24 community TEST
> > > set policy-options community TEST members 65000:100
> > >
> > > Instead we are forced to put the value directly:
> > >
> > > set routing-options static route 192.0.2.0/24 community 65000:100
> > > set policy-options community TEST members 65000:100
> > >
> > > Can Juniper please fix this?  Thanks :-)
> >
> > Speaking unofficially, these consistency warts drive us just as crazy as
> they do you.
> >
> > But in order to preserve our own sanity, and not bring down the wrath of
> everyone else's operations who use the existing behavior, it's not likely
> to get changed.
>
> Well, please just add a new option "community-name" then, like there
> is for the operational mode command.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] RTBH

2016-01-15 Thread Luis Balbinot
And remember that if you plan to accept prefixes from external
neighbors and send them to the blackhole route, you need
"accept-remote-nexthop".

On Fri, Jan 15, 2016 at 3:20 PM, Johan Borch  wrote:
> Thanks
>
> Setting route preference helped :)
>
> Johan
>
> On Fri, Jan 15, 2016 at 12:23 AM, Charles van Niman 
> wrote:
>
>> What route preference is your IGP route, and what IGP? I assume your
>> discard/static has a route preference of 5? Also, do you mind pasting
>> the show route extensive output? Is your static discard route in the
>> same routing-instance/VRF as the BGP prefix?
>>
>> /Charles
>>
>> On Thu, Jan 14, 2016 at 3:10 PM, Johan Borch 
>> wrote:
>> > Hi!
>> >
>> > I have implemented RTBH in my small network of 8 routers. The DFZ is
>> > running in an L3VPN, each router has a multihop iBGP session with my
>> > RTBH router, and it works, but one thing annoys me.
>> >
>> > If I announce an offending IP to be blackholed, only one of the routers
>> > will point to the discard route. The other 7 will see the announced
>> > route via BGP from the one that got it first, I guess, and send the
>> > traffic to that one, where it is discarded. If I do show extensive on
>> > the route, it is preferred because of IGP. I can't figure out how to
>> > get each router to prefer the discard locally. If I set local-pref on
>> > one router, the other 7 will send the traffic there instead.
>> >
>> > Every router has
>> >
>> >  route a.b.c.d/32 {
>> > discard;
>> > install;
>> > }
>> >
>> > And from the sending RTBH router, they are announced with next-hop a.b.c.d.
>> >
>> > Idéas? :)
>> >
>> > Regards
>> > Johan
>> > ___
>> > juniper-nsp mailing list juniper-nsp@puck.nether.net
>> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Cisco ME3600 migration to something with more 10 gig ports

2015-07-14 Thread Luis Balbinot
Take a look at the EX4550. Just pay attention to the number of routes it
supports and see if that suits you. It's not a core router, but neither is
the ME3600.
On Jul 13, 2015 11:54 AM, "Aaron"  wrote:

> Hi everyone,
>
>
>
> I'm needing more 10 gig ports in my CO's for purposes of upgrading my FTTH
> OLT shelves with 10 gig.  I currently use Cisco ME3600's and do a lot of
> core ospf, and MP-iBGP over that for MPLS L2VPN's (eline, elan, etree) and
> L3VPN's (VPNv4 and testing VPNv6)
>
>
>
> I'm thinking about Cisco ASR920's for (4) 10 gig ports and several (1) gig
> ports.  Would this be good?
>
>
>
> What are some comparable Juniper products that would fit here?  Is Juniper
> better in that area?
>
>
>
> Aaron
>
>
>
>
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Juniper hardware purchasing and delivery time

2015-06-26 Thread Luis Balbinot
Usually you'll get longer delays at the end of quarters due to a
higher demand (great deals and price cuts).

Someone from Juniper told me that large companies (with large orders)
are to blame for some of these longer delays.

Luis

On Fri, Jun 26, 2015 at 7:35 AM, Paul Stewart  wrote:
> It can depend on a few factors and what the partner's arrangement is with 
> Juniper.
>
> The partner's arrangement also determines where they have to acquire the 
> equipment from (i.e. distribution or from Juniper directly).
>
> The timeline you mention isn't entirely out of line.  Sometimes the delays 
> are also with the reseller themselves and internal processes.
>
> My experience has been typically a month for Juniper MX but for SRX/QFX stuff 
> it's usually a week.   This assumes the warehouses involved have stock - if 
> not, SRX/QFX can take several weeks.  That was when I worked
> for a partner.
>
> Now I'm on the other side of the table and the current partner we buy from 
> takes a month just to process the paperwork internally ... very painful ...
>
> Paul
>
>
> -Original Message-
> From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of 
> james list
> Sent: Friday, June 26, 2015 3:00 AM
> To: juniper-nsp@puck.nether.net
> Subject: [j-nsp] Juniper hardware purchasing and delivery time
>
> I made an agreement with a Juniper partner to buy some hardware (MX, SRX and
> QFX) and it took 45 calendar days to deliver the production boxes and 4 months 
> for the spare (for maintenance) boxes (the spares remain in the partner 
> warehouse…).
>
>
> I’m wondering if, on the Juniper partner side, there are two different ways 
> to order the devices, with different SLAs for delivery time on Juniper’s side.
>
>
> Could any partner shed some light?
>
>
> Greetings
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net 
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Juniper EX4550 load balancing of MPLS traffic

2015-06-03 Thread Luis Balbinot
Thanks Mark.

That's for a LAG, right? Not ECMP traffic.

Luis

On Wed, Jun 3, 2015 at 3:18 AM, Mark Tinka  wrote:
>
>
> On 2/Jun/15 20:54, Luis Balbinot wrote:
>> Hi.
>>
>> If I have two EX4550s acting as P routers and they are connected by a
>> LAG how will the MPLS traffic be load balanced? How deep will the
>> hashing algorithm look into the packets?
>>
>> Juniper says these EXs cannot do MPLS and ECMP. Is this still true?
>
> We used the EX4550's as core switches back in 2012 - 2014, before
> migrating to another platform.
>
> They are able to hash Layer 2 traffic that is primarily MPLS-based. This
> was a major requirement for us because I've had bad experiences with the
> EX4200 acting as a core switch where the Ethernet payload is MPLS traffic.
>
> I've never ran MPLS on any EX box, but the EX4550 can certainly load
> balance Ethernet traffic if the payload is MPLS. It will look deep
> enough and hash against the payload that MPLS is carrying.
>
> Just don't try this on the EX4200. The EX4200 only supports hashing
> against IP and Ethernet traffic. Not sure what the newer EX switches can do.
>
> Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Juniper EX4550 load balancing of MPLS traffic

2015-06-02 Thread Luis Balbinot
Hi.

If I have two EX4550s acting as P routers and they are connected by a
LAG how will the MPLS traffic be load balanced? How deep will the
hashing algorithm look into the packets?

Juniper says these EXs cannot do MPLS and ECMP. Is this still true?

Luis
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp