Re: [j-nsp] ACX5448 & ACX710 - Update!

2020-07-29 Thread Shamen Snyder
Heads up on the ACX5448. There is a major LDP bug in the recommended code,
19.3R2-S3.

LDP hellos are punted to the RE in the queue rx-unknown-mc instead of
rxq-l3-nc-hi.

A major shift in multicast on our network dropped LDP neighbors.

The issue doesn’t happen in 20.2R1, if you find that release stable (I
haven’t). I believe the PR is PR1503469, and the fix should be going into
19.3R3.




On Wed, Jul 29, 2020 at 2:20 PM Baldur Norddahl  wrote:

> I am planning to deploy ACX710 with maybe 20 units (which for us is a huge
> number). We would have ordered DC in any case, so that is a non-issue. We
> will have them at CO buildings where DC is what you get, and maybe in the
> future in roadside cabinets, where DC is the easy way to have some battery
> backup.
>
> I am also going to get a few ACX5448 for our datacentre locations. I am
> still considering getting some AC-to-DC power supplies for the ACX710,
> because the cost saving is considerable. It is not like finding AC-to-DC
> devices is hard: every laptop comes with one (yes, I know, too little
> voltage).
>
> Our purpose is to replace our MPLS core with new gear that has deep buffers
> and better support for traffic engineering etc. These will be P and PE
> routers mostly doing L2VPN. We will have a 100G ring topology of ACX710
> devices moving MPLS packets and terminating L2VPN.
>
> Seems to be a perfect fit to me. I am not interested in the older ACX
> devices, which lack buffers and are probably not much better than the gear
> we want to replace.
>
> Regards
>
> Baldur
>
>
> ons. 29. jul. 2020 16.25 skrev Mark Tinka :
>
> >
> >
> > On 29/Jul/20 15:49, Eric Van Tol wrote:
> > > We ran into this, too. We signed up to beta test at the beginning of
> > this year and nowhere, not even in discussions with our SE (who also
> wasn't
> > told by Juniper), was it mentioned it was a DC-only device. Imagine my
> > surprise when I received the box and it was DC only. Such a
> disappointment.
> >
> > The messaging we got from them earlier in the year about trying out
> > their new Metro-E box was that we would be happy with it, considering
> > that every Metro-E solution they've thrown at us since 2008 has fallen
> > flat, splat!
> >
> > Come game-time, even our own SE was blindsided by this DC-only support
> > on the ACX710. Proper show-stopper.
> >
> > At any rate, the story is that they should be pushing out some new
> > ACX7xxx boxes from next year, which should have AC support (to you
> > psych. majors: more for the general public, and not the custom-built
> > ACX710).
> >
> > I'm not sure I can be that patient, so I'm sniffing at Nokia's new
> > Metro-E product line. The problem is that so far, as with Juniper and
> > Cisco, they've gone down the Broadcom route (some boxes shipping with
> > Qumran, others with Jericho 2), and on paper, they are already failing
> > some of our forwarding requirements.
> >
> > It's not easy...
> >
> > Mark.
> >
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
> >


Re: [j-nsp] ACX5448 & ACX710 - Update!

2020-07-29 Thread Shamen Snyder
The Juniper Bolan architecture is supposed to have an AC variant.

• Hardened (-40C to 65C), compact (445mm x 221mm x 250mm) form factor,
suitable for cabinets in the pre-aggregation network layer
• 2 Routing Engine slots, 1:1 redundant control and forwarding/switching
plane
• 320 Gb/s and 2.4 Tb/s RP variants; full FIB with the 2.4 Tb/s RP (1.5M FIB)
• Flexibility of 7 (DC versions) or 6 (AC versions) line card slots
• 8x 1/10GE
• 8x 10/25GE
• 2x 40/100GE
• 4x 40/100GE (C-Temp)

I haven’t been following it much, but may be worth poking your SE on.

On Wed, Jul 29, 2020 at 9:43 AM Mark Tinka  wrote:

> So an update on this thread...
>
> Juniper went ahead and made the ACX710 a DC-only box. So if you are an
> AC house, you're in deep doo-doo (which is us).
>
> DC, for large scale deployment in the Metro? Makes zero sense to me.
>
> Apparently, no way around this; which, to me, smells of the box being
> built for some larger operator (like mobile), who primarily have DC
> plants. And that's it - no other options for anyone else.
>
> Oh, these vendors...
>
> I haven't yet seen an ACX710 outside of a PDF, but deep scouring on the
> Internet led me to this:
>
>
>
> https://portal.nca.org.gh/search_type_approval_view_details.php?typeApproveDetailID=2244
>
> Some kind of type approval with the National Communications Authority of Ghana.
>
> Mark.
>


Re: [j-nsp] qfx5100, not possible to add a scheduler-map to an interface

2020-03-26 Thread Shamen Snyder
Most QFXs (and the EX4600) use ETS-style CoS and scheduling.

Give this tech library document a read. I think it will answer your
questions.

https://www.juniper.net/documentation/en_US/junos/topics/example/cos-hierarchical-port-scheduling-ets-configuring.html
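As a rough sketch of what that document walks through (the names fc-set and tcp-data, and the scheduler-map name test, are placeholders, not defaults): on these platforms the scheduler-map attaches through a traffic-control-profile and a forwarding-class-set, instead of directly to the interface:

```
set class-of-service forwarding-class-sets fc-set class best-effort
set class-of-service traffic-control-profiles tcp-data scheduler-map test
set class-of-service traffic-control-profiles tcp-data guaranteed-rate 5g
set class-of-service interfaces xe-0/0/0 forwarding-class-set fc-set output-traffic-control-profile tcp-data
```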

On Thu, Mar 26, 2020 at 3:50 AM niklas rehnberg 
wrote:

> Hi Experts,
> I have noticed that it is not possible to add a scheduler-map to an
> interface.
> example: set class-of-service interfaces xe-0/0/0 scheduler-map test
>
> root@qfx5100# set class-of-service interfaces xe-0/0/0 ?
> Possible completions:
> + apply-groups Groups from which to inherit configuration data
> + apply-groups-except  Don't inherit configuration data from these groups
> > classifiers  Classifiers applied to incoming packets
>   congestion-notification-profile  Congestion notification profile for the
> interface
> > exclude-queue-overhead-bytes  Exclude the overhead bytes from the queue
> statistics
>   forwarding-class Forwarding class assigned to incoming packets
> > forwarding-class-set  Map forwarding class sets to output traffic control
> profile
> > logical-interface-aggregate-statistics  Logical interface aggregate queue
> statistics
> > rewrite-rulesRewrite rules applied to outgoing packets
> > unit Logical interface unit (or wildcard)
>
> So if I want to change the bandwidth allocation, should I use the default
> scheduler-map?
>
> Example?
>
> Thanks Filmar


Re: [j-nsp] ACX5448 & ACX710

2020-01-23 Thread Shamen Snyder
I have been following the ACX710 for a while now. We have a use case in
rural markets where we need a dense 10G hardened 1 RU box.

Looks like a promising box; I hope the price is right. If not, we may have
to jump to Cisco ASR920s.

4 100/40G (can be channelized to 4x25G or 4x10G) interfaces, 24 1/10G
interfaces. Broadcom QAX chipset. 320Gbps of throughput. 3GB buffer.

On Tue, Jan 21, 2020 at 11:38 AM Mark Tinka  wrote:

> Hi all.
>
> My Juniper SE is pressuring me to test the ACX boxes per subject.
>
> These are shipping with Jericho 2c and Qumran 2c chipsets.
>
> For anyone that has deployed these, are you happy, particularly if you
> have previous Trio experience?
>
> As some of you know, I generally shy away from merchant silicon,
> especially from traditional equipment vendors such as Juniper and Cisco.
>
> All feedback is much appreciated. Thanks.
>
> Mark.


Re: [j-nsp] Link establishment issues with 1Gbps SX/LX SFPs on QFX5110

2019-06-25 Thread Shamen Snyder
What version of code are you using? When I worked at Juniper I dealt with an
ISP that had this issue, and I believe 17.3R3 did not have the issue at all.

There was an issue with BCM and the Juniper code. You can manually force
the port up in the BCM shell (not that this is a viable long-term solution).

FPC0( vty)# set dcbcm bcmshell "ps 45"

HW (unit 0)
 ena/speed/ link auto  STP   lrn  inter  max   loop
 port      link duplex scan  neg?  state  pause  discrd ops  face   frame  back
 ge0( 45)  down  1G FD   HW   Yes  Block  None          FA   SGMII  1518

FPC0( vty)# set dcbcm bcmshell "port 45 an=1 e=t"

HW (unit 0)

FPC0( vty)# set dcbcm bcmshell "ps 45"

HW (unit 0)
 ena/speed/ link auto  STP   lrn  inter  max   loop
 port      link duplex scan  neg?  state  pause  discrd ops  face   frame  back
 ge0( 45)  !ena  1G FD  None  No   Block  None          D    SGMII  9412




On Tue, Jun 25, 2019 at 1:15 AM Timothy Creswick 
wrote:

> > Can confirm I've seen exactly the same issue on QFX5110 18.4R1.8, with
> > the only way I can get fs.com LX 1G transceivers working being turning
> > off auto-neg.
>
> Thanks. We've been looking at the Broadcom tools on the QFX and note some
> apparently quite major differences in the way that the interfaces negotiate
> across versions.
>
> In the older satellite versions (e.g. 2.0R1.1), GE interfaces appear to
> have the correct auto neg state set on the chipset. This seems to coincide
> with it using GMII between the SFP and the board. This reflects the way the
> ports are configured in Junos:
>
>  ena/speed/ link auto  STP   lrn  inter  max   loop
>  port  link duplex scan  neg?  state    pause  discrd ops  face  frame  back
>   xe4   up   1G FD   HW   Yes  Forward  None          F    GMII  1604
>   xe6   up   1G FD   HW   No   Forward  None          F    GMII  1522
>  xe10   up   1G FD   HW   Yes  Forward  None          F    GMII  1526
>  xe11   up   1G FD   HW   Yes  Forward  None          F    GMII  1526
>  xe12   up   1G FD   HW   Yes  Forward  None          F    GMII  1526
>
> On newer versions (e.g. 3.5R2.2), GE interfaces are always shown as
> auto-neg disabled on the chipset:
>
>  ena/speed/ link auto  STP   lrn  inter  max   loop
>  port      link duplex scan  neg?  state    pause  discrd ops  face   frame  back
>  ge0(  3)   up   1G FD   HW   No   Forward  None          F    SGMII  1518
>  ge1(  4)   up   1G FD   HW   No   Forward  None          F    SGMII  1518
>  ge2(  6)  down  1G FD   HW   No   Forward  None          F    SGMII  1518
>  ge3( 12)  !ena  1G FD   HW   No   Forward  None          F    SGMII  1518
>  ge4( 13)   up   1G FD   HW   No   Forward  None          F    SGMII  1526
>  ge5( 15)   up   1G FD   HW   No   Forward  None          F    SGMII  1526
>
> They will show as up here and in Junos but not on the remote device.
>
> Forcefully setting the interface to '[gig]ether-options auto-negotiate'
> does not change the bcm setting.
>
> If you forcefully change the bcm setting (i.e. tell the chipset to
> auto-neg), the port goes down both in the bcm shell and in Junos.
>
> Interestingly, some of these ports work, others do not. We haven't been
> able to identify a pattern and JTAC are still looking.
>
> We also can't explain why these ports indicate they are SGMII.
>
> At the behest of JTAC we put these on 3.5R1-S4.0. This has the
> characteristic that in the following example we have a MM (port 1) and SM
> (port 3) transceiver. Both are Flexoptix correctly coded (we've tried
> others). Port 1 gets link successfully, Port 3 does not. Note that now the
> working port is back to GMII:
>
>  ena/speed/ link auto  STP   lrn  inter  max   loop
>  port      link duplex scan  neg?  state    pause  discrd ops  face   frame  back
>  ge0(  1)   up   1G FD   HW   Yes  Forward  None          F    GMII   1518
>  ge1(  3)  down 10M FD   HW   Yes  Forward  None          F    SGMII  1518
>
> Can anyone shed any more light on this?
>
> Regards,
>
> Tim


Re: [j-nsp] JNCIE-SP question

2017-12-11 Thread Shamen Snyder
I spent 2 hours a day reading, and 8 hours of lab (sometimes more) on
weekends, for about a year and a half. All that time I put into it paid off,
as I passed on my first attempt. That being said, I paid out of pocket
for everything and really didn't want to waste the money on a failed
attempt.





On Mon, Dec 11, 2017 at 10:39 AM, Aaron Gould  wrote:

> I accomplished JNCIP-SP last week, and have a question for the JNCIE-SP
> folks out there.  To those of you who have done the SP track, how much
> time/effort do you recommend needs to go into preparing for JNCIE-SP ?
>
>
>
> My progression has been:
>
>
>
> ~3 months of study/prep - JNCIA-JUNOS
>
> ~6 months of study/prep - JNCIS-SP
>
> ~9 months of study/prep - JNCIP-SP
>
>
>
> So how much more time/preparation will go into my preparing for a
> legitimate attempt at the JNCIE-SP lab?
>
>
>
> -Aaron
>
>
>


Re: [j-nsp] traceroute in mpls vpn's not showing P hops

2017-08-25 Thread Shamen Snyder
I think there is a bug in how logical tunnels handle TTL. Try physical
ports for your logical systems.

On Fri, Aug 25, 2017 at 8:00 AM, Aaron Gould  wrote:

> This is crazy… I was shutting down some lt interfaces, trying to see if I
> could get traffic to follow the same path where I saw the P hops
> reported on traceroute, and I suddenly saw it start working again: it
> will now show the P hops on traceroute, but via a different path this time.
> I’m encouraged that P hops are seen on traceroute even via a different path
> than my previous email showed.  But weird that it works sometimes and not
> other times.  Not sure why yet.
>
>
>
>
>
> r1@lab-mx104:r1> traceroute 1.1.10.2 wait 1
>
> traceroute to 1.1.10.2 (1.1.10.2), 30 hops max, 40 byte packets
>
> 1  1.1.0.2 (1.1.0.2)  0.537 ms  0.452 ms  0.359 ms
>
> 2  * * *
>
> 3  * * *
>
> 4  * * *
>
> 5  1.1.10.1 (1.1.10.1)  0.628 ms  0.629 ms  0.527 ms
>
> 6  1.1.10.2 (1.1.10.2)  0.580 ms  0.620 ms  0.542 ms
>
>
>
> r1@lab-mx104:r1> traceroute 1.1.10.2 wait 1
>
> traceroute to 1.1.10.2 (1.1.10.2), 30 hops max, 40 byte packets
>
> 1  1.1.0.2 (1.1.0.2)  0.493 ms  0.471 ms  0.366 ms
>
> 2  * * *
>
> 3  * * *
>
> 4  * * *
>
> 5  1.1.10.1 (1.1.10.1)  0.686 ms  0.613 ms  0.523 ms
>
> 6  1.1.10.2 (1.1.10.2)  0.600 ms  0.587 ms  0.555 ms
>
>
>
> r1@lab-mx104:r1> ping 1.1.10.2
>
> PING 1.1.10.2 (1.1.10.2): 56 data bytes
>
> 64 bytes from 1.1.10.2: icmp_seq=0 ttl=59 time=0.705 ms
>
> 64 bytes from 1.1.10.2: icmp_seq=1 ttl=59 time=0.769 ms
>
> 64 bytes from 1.1.10.2: icmp_seq=2 ttl=59 time=0.645 ms
>
> 64 bytes from 1.1.10.2: icmp_seq=3 ttl=59 time=0.638 ms
>
> 64 bytes from 1.1.10.2: icmp_seq=4 ttl=59 time=0.615 ms
>
> ^C
>
> --- 1.1.10.2 ping statistics ---
>
> 5 packets transmitted, 5 packets received, 0% packet loss
>
> round-trip min/avg/max/stddev = 0.615/0.674/0.769/0.056 ms
>
>
>
> r1@lab-mx104:r1> traceroute 1.1.10.2 wait 1
>
> traceroute to 1.1.10.2 (1.1.10.2), 30 hops max, 40 byte packets
>
> 1  1.1.0.2 (1.1.0.2)  23.530 ms  0.403 ms  0.364 ms
>
> 2  1.1.7.2 (1.1.7.2)  0.679 ms  0.586 ms  1.884 ms
>
>  MPLS Label=300208 CoS=0 TTL=1 S=0
>
>  MPLS Label=16 CoS=0 TTL=1 S=1
>
> 3  1.1.11.2 (1.1.11.2)  1.481 ms  0.656 ms  0.627 ms
>
>  MPLS Label=300272 CoS=0 TTL=1 S=0
>
>  MPLS Label=16 CoS=0 TTL=2 S=1
>
> 4  1.1.6.1 (1.1.6.1)  0.628 ms  0.745 ms  0.640 ms
>
>  MPLS Label=300464 CoS=0 TTL=1 S=0
>
>  MPLS Label=16 CoS=0 TTL=3 S=1
>
> 5  1.1.2.2 (1.1.2.2)  0.704 ms  0.648 ms  0.633 ms
>
>  MPLS Label=300400 CoS=0 TTL=1 S=0
>
>  MPLS Label=16 CoS=0 TTL=4 S=1
>
> 6  1.1.3.2 (1.1.3.2)  0.662 ms  0.673 ms  0.683 ms
>
>  MPLS Label=300528 CoS=0 TTL=1 S=0
>
>  MPLS Label=16 CoS=0 TTL=5 S=1
>
> 7  1.1.10.1 (1.1.10.1)  0.662 ms  0.627 ms  0.669 ms
>
> 8  1.1.10.2 (1.1.10.2)  0.713 ms  0.673 ms  0.639 ms
>
>
>
> r1@lab-mx104:r1>
>
>
>
> Here are some traces from the ingress PE r2… the first trace is between the
> ingress/egress PE (r2 and r8) loopbacks… the second trace is from the PE/CE
> interface on r2/r8, facing their respective CE r1/r9…
>
>
>
>
>
> [edit]
>
> r2@lab-mx104:r2# run traceroute 1.1.255.8 source 1.1.255.2
>
> traceroute to 1.1.255.8 (1.1.255.8) from 1.1.255.2, 30 hops max, 40 byte
> packets
>
> 1  1.1.7.2 (1.1.7.2)  0.734 ms  0.495 ms  0.381 ms
>
> 2  1.1.11.2 (1.1.11.2)  0.397 ms  0.532 ms  0.375 ms
>
> 3  1.1.255.8 (1.1.255.8)  0.564 ms  0.788 ms  0.535 ms
>
>
>
> [edit]
>
> r2@lab-mx104:r2# run traceroute 1.1.10.1 source 1.1.0.2 routing-instance
> test
>
> traceroute to 1.1.10.1 (1.1.10.1) from 1.1.0.2, 30 hops max, 40 byte
> packets
>
> 1  1.1.7.2 (1.1.7.2)  0.722 ms  0.695 ms  0.596 ms
>
>  MPLS Label=300208 CoS=0 TTL=1 S=0
>
>  MPLS Label=16 CoS=0 TTL=1 S=1
>
> 2  1.1.11.2 (1.1.11.2)  0.588 ms  0.560 ms  0.528 ms
>
>  MPLS Label=300272 CoS=0 TTL=1 S=0
>
>  MPLS Label=16 CoS=0 TTL=2 S=1
>
> 3  1.1.6.1 (1.1.6.1)  0.617 ms  0.744 ms  0.623 ms
>
>  MPLS Label=300464 CoS=0 TTL=1 S=0
>
>  MPLS Label=16 CoS=0 TTL=3 S=1
>
> 4  1.1.2.2 (1.1.2.2)  0.696 ms  0.718 ms  0.629 ms
>
>  MPLS Label=300400 CoS=0 TTL=1 S=0
>
>  MPLS Label=16 CoS=0 TTL=4 S=1
>
> 5  1.1.3.2 (1.1.3.2)  0.647 ms  0.652 ms  0.628 ms
>
>  MPLS Label=300528 CoS=0 TTL=1 S=0
>
>  MPLS Label=16 CoS=0 TTL=5 S=1
>
> 6  1.1.10.1 (1.1.10.1)  0.798 ms  0.712 ms  0.667 ms
>
>
>
> [edit]
>
> r2@lab-mx104:r2#
>
>
>
> - Aaron Gould
>
>
>

Re: [j-nsp] traceroute in mpls vpn's not showing P hops

2017-08-24 Thread Shamen Snyder
Are you setting icmp-tunneling on all the routers? It would be helpful to
see your configuration.
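
For reference, the knob is a one-liner, and it needs to be configured on every LSR along the path (P routers included) for the tunneled ICMP responses to make it back to the source:

```
set protocols mpls icmp-tunneling
```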

On Thu, Aug 24, 2017 at 7:44 AM, Aaron Gould  wrote:

> I removed all rsvp and label-switched-path configs for a moment… then, with
> only mpls l3vpn configs, I turned on icmp-tunneling.  I still don’t see P
> hops on traceroute.
>
>
>
> r1@lab-mx104:r1> traceroute 1.1.10.2 wait 1
>
> traceroute to 1.1.10.2 (1.1.10.2), 30 hops max, 40 byte packets
>
> 1  1.1.0.2 (1.1.0.2)  0.469 ms  0.401 ms  0.366 ms
>
> 2  * * *
>
> 3  * * *
>
> 4  1.1.10.1 (1.1.10.1)  0.648 ms  0.583 ms  0.500 ms
>
> 5  1.1.10.2 (1.1.10.2)  0.647 ms  0.653 ms  0.541 ms
>
>
>
> On the ingress PE, I did this: “set protocols mpls no-propagate-ttl”
>
>
>
> r1@lab-mx104:r1> traceroute 1.1.10.2 wait 1
>
> traceroute to 1.1.10.2 (1.1.10.2), 30 hops max, 40 byte packets
>
> 1  1.1.0.2 (1.1.0.2)  0.458 ms  0.402 ms  0.414 ms
>
> 2  1.1.10.1 (1.1.10.1)  0.497 ms  0.413 ms  0.455 ms
>
> 3  1.1.10.2 (1.1.10.2)  0.646 ms  0.641 ms  0.669 ms
>
>
>
> This is all inside one mx104 running about 9 logical-systems.
>
>
>
> -Aaron
>
>
>
>
>
>
>
>
>
>
>

Re: [j-nsp] can i see bgp looped AS PATH prefixes on the receiving router

2017-06-16 Thread Shamen Snyder
I've only ever used the *loops* knob for hub and spoke VPNs. There may be
some other use cases, but I'm not aware of them.


On Fri, Jun 16, 2017 at 9:34 PM, Aaron Gould  wrote:

> Here’s what happened when I added “loops 1” …it seemed to just allow
> those previously loop-detected prefixes to now be allowed into the rib.
> True?  Is this safe?  I mean, isn’t this what AS PATH loop detection is
> supposed to prevent?  Then again, maybe if you know a good reason to do
> this, then I guess you just have to be careful and know what you’re doing,
> huh?
>
>
>
> ** before…
>
>
>
> [edit]
>
> r4@lab-mx104:r4# run show route receive-protocol bgp 10.0.2.9
>
>
>
> inet.0: 35 destinations, 39 routes (35 active, 0 holddown, 0 hidden)
>
>   Prefix  Nexthop  MED LclprefAS path
>
> * 192.168.50.0/24 10.0.2.9 100(65001)
> I
>
>   192.168.60.0/24 10.0.9.6 100(65001)
> I
>
>   192.168.70.0/24 10.0.9.7 100(65001)
> I
>
>
>
> ...looking at one of them...
>
>
>
> [edit]
>
> r4@lab-mx104:r4# run show route protocol bgp 192.168.10.0
>
>
>
> inet.0: 35 destinations, 39 routes (35 active, 0 holddown, 0 hidden)
>
> + = Active Route, - = Last Active, * = Both
>
>
>
> 192.168.10.0/24*[BGP/170] 08:07:39, localpref 101, from 10.0.6.1
>
>   AS path: I, validation-state: unverified
>
> > to 10.0.4.10 via lt-0/1/0.42
>
>
>
> ** after…
>
>
>
> [edit]
>
> r4@lab-mx104:r4# set protocols bgp group my-cbgp family inet unicast
> loops 1
>
>
>
> [edit]
>
> r4@lab-mx104:r4# run show route receive-protocol bgp 10.0.2.9
>
>
>
> inet.0: 35 destinations, 43 routes (35 active, 0 holddown, 0 hidden)
>
>   Prefix  Nexthop  MED LclprefAS path
>
> * 192.168.10.0/24 10.0.6.1 101(65001
> 65000) I
>
> * 192.168.20.0/24 10.0.6.2 100(65001
> 65000) I
>
>   192.168.30.0/24 10.0.2.2 100(65001
> 65000) I
>
> * 192.168.50.0/24 10.0.2.9 100(65001)
> I
>
>   192.168.60.0/24 10.0.9.6 100(65001)
> I
>
>   192.168.70.0/24 10.0.9.7 100(65001)
> I
>
> * 192.168.100.0/2410.0.6.1 101(65001
> 65000) I
>
>
>
> ...looking at one of them...
>
>
>
> [edit]
>
> r4@lab-mx104:r4# run show route protocol bgp 192.168.10.0
>
>
>
> inet.0: 35 destinations, 43 routes (35 active, 0 holddown, 0 hidden)
>
> + = Active Route, - = Last Active, * = Both
>
>
>
> 192.168.10.0/24*[BGP/170] 00:00:13, localpref 101, from 10.0.2.9
>
>   AS path: (65001 65000) I, validation-state:
> unverified
>
> > to 10.0.4.10 via lt-0/1/0.42
>
> [BGP/170] 08:08:35, localpref 101, from 10.0.6.1
>
>   AS path: I, validation-state: unverified
>
> > to 10.0.4.10 via lt-0/1/0.42
>
>
>
>
>
> - Aaron Gould
>

Re: [j-nsp] can i see bgp looped AS PATH prefixes on the receiving router

2017-06-16 Thread Shamen Snyder
Originally you did need bgp keep all to see routes that failed as-path loop
checks. I thought that changed in one of the releases; however, that
doesn't appear to be the case. It's the same on eBGP and cBGP. You need keep
all to see it.
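
For anyone following along, the knob itself is below; note that it retains paths that would otherwise be discarded (including loop-rejected ones), so it costs RIB memory:

```
set protocols bgp keep all
```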

I did this from 12.1.


r...@mpr0.rdu.lab# run show route hidden extensive

inet.0: 17 destinations, 19 routes (17 active, 0 holddown, 2 hidden)
1.1.1.0/24 (3 entries, 1 announced)
TSI:
KRT in-kernel 1.1.1.0/24 -> {}
Page 0 idx 0 Type 1 val 9365148
Nexthop: Self
Localpref: 100
AS path: [65000] I
Communities:
Path 1.1.1.0 Vector len 4.  Val: 0
 BGP
Next hop type: Unusable
Address: 0x8f28f64
Next-hop reference count: 4
State: 
Inactive reason: Unusable path
Local AS: 65000 Peer AS: 65002
Age: 1:00
Task: BGP_65002.10.10.10.5+179
AS path: (65002 65000) I (Looped: 65000)
Router ID: 10.0.0.2
Indirect next hops: 1
Protocol next hop: 10.10.10.4
Indirect next hop: 0 -
Indirect path forwarding next hops: 0
Next hop type: Unusable




On Fri, Jun 16, 2017 at 3:04 PM, Aaron Gould  wrote:

> I wonder if confederations are creating the difference with what y’all are
> expecting to see?  In other words, perhaps confederations don’t show looped
> ASes as hidden.  Just guessing as I learn…
>
>
>
> Here are both sides: what is sent from r5, and what is received on r4.
>
>
>
> [edit]
>
> r4@lab-mx104:r4# show protocols bgp group my-cbgp | display set
>
> set logical-systems r4 protocols bgp group my-cbgp type external
>
> set logical-systems r4 protocols bgp group my-cbgp traceoptions file r4-cgp
>
> set logical-systems r4 protocols bgp group my-cbgp traceoptions flag state
> detail
>
> set logical-systems r4 protocols bgp group my-cbgp traceoptions flag route
> detail
>
> set logical-systems r4 protocols bgp group my-cbgp traceoptions flag
> update detail
>
> set logical-systems r4 protocols bgp group my-cbgp export my-ibgp
>
> set logical-systems r4 protocols bgp group my-cbgp neighbor 10.0.2.9
> peer-as 65001
>
>
>
> [edit]
>
> r4@lab-mx104:r4# run show route receive-protocol bgp 10.0.2.9
>
>
>
> inet.0: 35 destinations, 39 routes (35 active, 0 holddown, 0 hidden)
>
>   Prefix  Nexthop  MED LclprefAS path
>
> * 192.168.50.0/24 10.0.2.9 100(65001)
> I
>
>   192.168.60.0/24 10.0.9.6 100(65001)
> I
>
>   192.168.70.0/24 10.0.9.7 100(65001)
> I
>
>
>
> [edit]
>
>
>
>
>
>
>
> [edit]
>
> r5@lab-mx104:r5# show protocols bgp group my-cbgp | display set
>
> set logical-systems r5 protocols bgp group my-cbgp type external
>
> set logical-systems r5 protocols bgp group my-cbgp export my-ibgp
>
> set logical-systems r5 protocols bgp group my-cbgp neighbor 10.0.2.2
> peer-as 65000
>
> set logical-systems r5 protocols bgp group my-cbgp neighbor 10.0.2.10
> peer-as 65000
>
>
>
> [edit]
>
> r5@lab-mx104:r5# run show route advertising-protocol bgp 10.0.2.10
>
>
>
> inet.0: 38 destinations, 43 routes (38 active, 0 holddown, 0 hidden)
>
>   Prefix  Nexthop  MED LclprefAS path
>
> * 192.168.10.0/24 10.0.6.1 101(65000)
> I
>
> * 192.168.20.0/24 10.0.6.2 100(65000)
> I
>
> * 192.168.30.0/24 10.0.2.2 100(65000)
> I
>
> * 192.168.50.0/24 Self 100I
>
> * 192.168.60.0/24 10.0.9.6 100I
>
> * 192.168.70.0/24 10.0.9.7 100I
>
> * 192.168.100.0/2410.0.6.1 101(65000)
> I
>
>
>
>
>

Re: [j-nsp] can i see bgp looped AS PATH prefixes on the receiving router

2017-06-16 Thread Shamen Snyder
Looped as-path routes should show up as hidden. Do you have “bgp keep none”
configured?

On Fri, Jun 16, 2017 at 8:44 AM, Aaron Gould  wrote:

> Nothing hidden.
>
>
>
>
>
> [edit]
>
> r4@lab-mx104:r4# run show route receive-protocol bgp 10.0.2.9
>
>
>
> inet.0: 33 destinations, 33 routes (33 active, 0 holddown, 0 hidden)
>
>   Prefix  Nexthop  MED LclprefAS path
>
> * 192.168.50.0/24 10.0.2.9 100(65001)
> I
>
> * 192.168.60.0/24 10.0.9.6 100(65001)
> I
>
> * 192.168.70.0/24 10.0.9.7 100(65001)
> I
>
>
>
> [edit]
>
> r4@lab-mx104:r4# run show route receive-protocol bgp 10.0.2.9 hidden
>
>
>
> inet.0: 33 destinations, 33 routes (33 active, 0 holddown, 0 hidden)
>
>
>
> [edit]
>
> r4@lab-mx104:r4# run show route hidden
>
>
>
> inet.0: 33 destinations, 33 routes (33 active, 0 holddown, 0 hidden)
>
>
>
> - Aaron Gould
>
>
>
>
>
> From: Tomasz Mikołajek [mailto:tmikola...@gmail.com]
> Sent: Friday, June 16, 2017 7:17 AM
> To: Aaron Gould 
> Subject: Re: [j-nsp] can i see bgp looped AS PATH prefixes on the
> receiving router
>
>
>
> Hello.
>
> Try: show route hidden
>
>
>

[j-nsp] Service Provider Shaping vs Policing

2017-04-19 Thread Shamen Snyder
I'm curious as to what other Juniper service providers are doing for
their internet customers. I assume most probably shape or police at the
customer CPE or as close as they can to it.

We are currently in a position where we terminate internet customers in the
POPs where we purchase bulk transit, in several colocations around the
United States, then carry customer internet traffic back to the IP
termination point via our MPLS network.

Shaping is broken when configured on a LAG (see KB22921): depending on how
many member interfaces you have in a LAG, a customer would need that many
flows to see all of their bandwidth. So I assume most providers are using
policing instead.

What type of policers are you guys using? Are you soft policing and
letting CoS PLP/drop profiles in the core handle the congestion or doing
hard policing?
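
To make the LAG caveat concrete, here is a toy model of the arithmetic. It assumes (my reading of the KB22921 behaviour, as an illustration rather than a statement of the exact PFE behaviour) that the configured shaper is instantiated per member link and that any single flow hashes onto exactly one member:

```python
def effective_rate(shaped_rate_mbps: float, lag_members: int, flows: int) -> float:
    """Model shaping applied per LAG member (assumed KB22921 behaviour).

    The configured shaper is split across member links, so each member is
    capped at shaped_rate / lag_members.  A flow hashes to exactly one
    member, so a customer needs at least `lag_members` flows (spread
    evenly, best case) to reach the full shaped rate.
    """
    per_member = shaped_rate_mbps / lag_members
    usable_members = min(flows, lag_members)  # best-case hash distribution
    return per_member * usable_members

# A 1 Gb/s shaper on a 4-member LAG: a single flow tops out at 250 Mb/s.
print(effective_rate(1000, 4, 1))  # 250.0
print(effective_rate(1000, 4, 4))  # 1000.0
```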


[j-nsp] Fwd: Ethernet OAM Issues

2014-09-04 Thread Shamen Snyder
Good morning,


I’ve been trying to get Ethernet OAM CFM to work properly, and the lo0 inet
filter is causing the layer 2 protocol to stay stuck in the ‘start’ state.


If I deactivate the lo0 filter, add the source IP to the trusted prefix
list, commit, and then activate the filter, OAM stays up.

If I remove the source IP from the trusted prefix list, OAM goes into a
failed state.

If I have the lo0 filter activated and the source IP in the trusted prefix
list, and try to bring OAM up, it stays stuck in the start state.


The EX4200 at the customer location has no lo0 filter.


So it seems the initial connection is still being dropped by the lo0 filter.


Has anyone run into this problem and know how to get around it? Deactivating
our lo0 filter is not a solution, as this is a core MPLS router.



MX5 configuration:


root@mpr0> show configuration protocols oam
ethernet {
    connectivity-fault-management {
        traceoptions {
            file oam;
            flag all;
        }
        action-profile link-down-take-down {
            event {
                interface-status-tlv lower-layer-down;
                port-status-tlv blocked;
                adjacency-loss;
            }
            action {
                interface-down;
            }
        }
        maintenance-domain provider-md {
            level 5;
            maintenance-association customer-ma {
                continuity-check {
                    interval 1s;
                }
                mep 101 {
                    interface ae0.2792;
                    direction down;
                    auto-discovery;
                    remote-mep 100 {
                        action-profile link-down-take-down;
                    }
                }
            }
        }
    }
}

root@mpr0> show configuration firewall filter lo0
term allow-ntp {
    from {
        source-address {
            x.x.x.x/32;
            y.y.y.y/32;
        }
        protocol udp;
        port ntp;
    }
    then accept;
}
term allow {
    from {
        source-prefix-list {
            trusted;
        }
    }
    then accept;
}
term allow-tcp {
    from {
        protocol tcp;
        tcp-established;
    }
    then accept;
}
term allow-icmp {
    from {
        protocol icmp;
    }
    then {
        policer small-bw-limit;
        log;
        accept;
    }
}
term allow-tracert {
    from {
        protocol udp;
        destination-port 33434-33523;
    }
    then accept;
}
term allow-bgp {
    from {
        source-prefix-list {
            bgp-peers;
        }
        protocol tcp;
        destination-port bgp;
    }
    then accept;
}
term allow-snmp {
    from {
        source-prefix-list {
            snmp-nms;
            trusted;
        }
        protocol udp;
        destination-port snmp;
    }
    then accept;
}
term allow-mcast {
    from {
        protocol pim;
    }
    then accept;
}
term deny-all {
    then {
        discard;
    }
}


EX4200 configuration:

root@ms0> show configuration protocols oam
ethernet {
    connectivity-fault-management {
        action-profile link-down-take-down {
            event {
                adjacency-loss;
            }
            action {
                interface-down;
            }
        }
        maintenance-domain customer-md {
            level 5;
            maintenance-association customer-ma {
                continuity-check {
                    interval 1s;
                }
                mep 100 {
                    interface ge-0/1/0.0 vlan-id 2792;
                    direction down;
                    auto-discovery;
                    remote-mep 101 {
                        action-profile link-down-take-down;
                    }
                }
            }
        }
    }
}
