[j-nsp] Auto-bandwidth Accuracy

2010-05-23 Thread Richard A Steenbergen
Recently I've been noticing some really odd auto-bandwidth behavior on
several different routers, and I'm wondering if anybody knows if this is
a known bug or if I'm doing something really wrong in my autobw config.

Specifically, I'm seeing many cases where the RSVP reservations on an
interface are vastly higher than the actual traffic going over it. I
started comparing the auto-bandwidth measured bandwidth value vs. the
RSVP reserved bandwidth across my LSPs (with an op script :P), and
noticed that a large number of LSPs that were ingress on Juniper routers
were consistently reserving more bandwidth than they were actually
passing.
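For anyone who wants to reproduce the comparison, here is a rough Python
sketch of the check such an op script performs. The data structure and
numbers are hypothetical; in a real op script the measured and reserved
values would come from the output of show mpls lsp detail:

```python
def find_over_reserved(lsps, tolerance=1.2):
    """Return LSPs whose RSVP reservation exceeds the auto-bandwidth
    measured rate by more than `tolerance` (1.2 = 20% headroom allowed).

    `lsps` maps LSP name -> (measured_bps, reserved_bps); both values
    are in bits per second.
    """
    return {
        name: reserved / measured
        for name, (measured, reserved) in lsps.items()
        if measured > 0 and reserved > measured * tolerance
    }

# Hypothetical sample data illustrating the symptom:
lsps = {
    "lsp-a": (900e6, 950e6),   # reservation roughly tracks traffic: fine
    "lsp-b": (900e6, 1.6e9),   # reserving ~1.78x measured traffic: flagged
}
print(find_over_reserved(lsps))  # flags lsp-b with its over-reservation ratio
```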

To troubleshoot this further, I picked one LSP at random and followed it
through the course of an entire adjust-interval. I also watched it in
monitor label-switched-path, and followed the bandwidth recorded for
it in the mpls statistics file. The stats file pretty consistently
recorded a bandwidth of around 900Mbps; some samples were up to 1G, some
were down in the 800Mbps range, but nothing was significantly outside
this range:

xxx.-xxx.-BRONZE-1   20442770 pkt   21800398308 Byte  91864 pps   97826023 Bps  Util 43.47%
xxx.-xxx.-BRONZE-1   25748678 pkt   27500224526 Byte  89930 pps   96607224 Bps  Util 42.93%
xxx.-xxx.-BRONZE-1   31309754 pkt   33516047564 Byte  95880 pps  103721086 Bps  Util 46.09%
xxx.-xxx.-BRONZE-1   36934965 pkt   39389728013 Byte  90729 pps   94736781 Bps  Util 42.10%
xxx.-xxx.-BRONZE-1   41323164 pkt   44001156442 Byte  86043 pps   90420165 Bps  Util 40.18%
xxx.-xxx.-BRONZE-1   46229207 pkt   49166295068 Byte  84586 pps   89054114 Bps  Util 39.58%
xxx.-xxx.-BRONZE-1   51764861 pkt   55023074603 Byte  92260 pps   97612992 Bps  Util 43.38%
xxx.-xxx.-BRONZE-1   57091315 pkt   60691783494 Byte  90278 pps   96079811 Bps  Util 42.70%
xxx.-xxx.-BRONZE-1   62138489 pkt   66009079194 Byte  90128 pps   94951708 Bps  Util 42.20%
xxx.-xxx.-BRONZE-1   67697838 pkt   72030553645 Byte  92655 pps  100357907 Bps  Util 44.60%
xxx.-xxx.-BRONZE-1   73083250 pkt   77870203449 Byte  89756 pps   97327496 Bps  Util 43.25%
xxx.-xxx.-BRONZE-1   78530642 pkt   83799427998 Byte  90789 pps   98820409 Bps  Util 43.91%
xxx.-xxx.-BRONZE-1   84166327 pkt   89767404007 Byte  85389 pps   90423878 Bps  Util 40.18%
xxx.-xxx.-BRONZE-1   89990750 pkt   96052103366 Byte  85653 pps   92422049 Bps  Util 41.07%
xxx.-xxx.-BRONZE-1   94808838 pkt  101299936674 Byte  87601 pps   95415151 Bps  Util 42.40%
xxx.-xxx.-BRONZE-1  100044983 pkt  106918990604 Byte  83113 pps   89191332 Bps  Util 39.64%
xxx.-xxx.-BRONZE-1  104706036 pkt  111928263183 Byte  86315 pps   92764307 Bps  Util 41.22%
xxx.-xxx.-BRONZE-1  109664547 pkt  117256403183 Byte  81287 pps   87346557 Bps  Util 38.82%
xxx.-xxx.-BRONZE-1  115001230 pkt  123065374817 Byte  84709 pps   92205898 Bps  Util 40.98%
xxx.-xxx.-BRONZE-1  120197917 pkt  128761293505 Byte  85191 pps   93375716 Bps  Util 41.50%
xxx.-xxx.-BRONZE-1  124790487 pkt  133783111501 Byte  79182 pps   86583068 Bps  Util 38.48%
xxx.-xxx.-BRONZE-1  129450091 pkt  138908431043 Byte  84720 pps   93187628 Bps  Util 41.41%
xxx.-xxx.-BRONZE-1  134048794 pkt  143940227806 Byte  82119 pps   89853513 Bps  Util 39.93%
xxx.-xxx.-BRONZE-1  138900130 pkt  149257983679 Byte  80855 pps   88629264 Bps  Util 39.39%
xxx.-xxx.-BRONZE-1  143665805 pkt  154447812210 Byte  79427 pps   86497142 Bps  Util 38.44%
xxx.-xxx.-BRONZE-1  148501587 pkt  159667032930 Byte  80596 pps   86987012 Bps  Util 38.66%
xxx.-xxx.-BRONZE-1  153971586 pkt  165650360517 Byte  78142 pps   85476108 Bps  Util 37.99%
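One thing to keep in mind when reading these samples: the Bps column is
bytes per second, so multiplying by 8 gives the rate in bits per second.
A small Python sketch for decoding one of these lines, with the field
layout assumed from the output above:

```python
import re

def parse_stats_line(line):
    """Parse one LSP sample from the MPLS statistics file.

    Field layout assumed from the samples above:
    <lsp-name> <pkts> pkt <bytes> Byte <pps> pps <Bps> Bps Util <pct>%
    Note that the Bps column is *bytes* per second.
    """
    m = re.search(r"(\d+) pkt\s+(\d+) Byte\s+(\d+) pps\s+"
                  r"(\d+) Bps\s+Util\s+([\d.]+)%", line)
    if m is None:
        raise ValueError("unrecognized stats line: %r" % line)
    pkts, nbytes, pps, byte_rate, util = m.groups()
    return {
        "packets": int(pkts),
        "bytes": int(nbytes),
        "pps": int(pps),
        "Bps": int(byte_rate),       # bytes per second, as reported
        "bps": int(byte_rate) * 8,   # bits per second
        "util_pct": float(util),
    }

sample = ("xxx.-xxx.-BRONZE-1  20442770 pkt  21800398308 Byte  "
          "91864 pps  97826023 Bps  Util 43.47%")
print("%.0f Mbps" % (parse_stats_line(sample)["bps"] / 1e6))  # prints "783 Mbps"
```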

Next, I watched the output of show mpls lsp name BLAH detail, looking
at the auto-bandwidth measured amount (Max AvgBW) and the reserved
bandwidth. I'm using a stats interval of 60 seconds and an
adjust-interval of 900 seconds, and in this instance no overflow samples
occurred. After the previous adjust-interval completes, the measured
bandwidth is reset to 0, and then starts updating again after the first
60-second stats interval is up. For around the first 700 seconds the
Max AvgBW was pretty close to what one would expect (around 900Mbps),
then it jumped to ~1.6Gbps for no reason that I can determine. The stats
file for this LSP (above) never showed anything above 1.0G, and a
monitor of the LSP never showed any sample that ever got anywhere near
that high (let alone enough to make an entire 60-second sample interval
report that high). At the end of the 900 seconds, the 1.6G value is what
was signaled to RSVP, and the cycle repeated itself. I watched it for
several more cycles and saw the same behavior happening over and over
again, with measured values of 1.8G plus, while the stats file continued
to show an average of around 800-900Mbps and no sample that ever went
above 1G.
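For reference, the knobs described above look roughly like this in Junos
(a sketch only, not the actual config from the affected router; the
stats file name is illustrative):

```
protocols {
    mpls {
        statistics {
            file mpls-stats;
            interval 60;                 /* 60-second stats interval */
            auto-bandwidth;
        }
        label-switched-path xxx.-xxx.-BRONZE-1 {
            auto-bandwidth {
                adjust-interval 900;     /* 900-second adjust interval */
            }
        }
    }
}
```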

This particular router is 

Re: [j-nsp] OSPF Area Problem with Directly Connected Vlan not being Advertised

2010-05-23 Thread Jose Madrid
Thanks Chuck, that did it.

On Sat, May 22, 2010 at 3:13 PM, Chuck Anderson c...@wpi.edu wrote:
 On Sat, May 22, 2010 at 02:09:47PM -0400, Jose Madrid wrote:
 I am setting up some new devices at a physically separate data center
 and am seeing some weird OSPF behavior.  In order to get the default
 route injected, I set up this new DC as a stub area (area 0.0.0.1) and

  For some reason, although this VLAN is turned on and configured, the
 /29 in use won't be advertised into OSPF. In order to make it work, I
 was forced to add the specific sub-interface for VLAN 50 to my OSPF
 config, and I was wondering if you guys could tell me how/why I can
 overcome this behavior. The reason this is a problem is that now

 Area 0.0.0.1 is a Stub area, but using policies to export
 Direct/Static into OSPF causes those routes to appear as AS Externals
 and turns the router into an ASBR.  Stub areas can't have AS Externals
 (Type 5 LSAs).  The reason it works when you put the VLAN interface
 into protocols ospf is because then OSPF treats it as an internal
 route (Type 1/2/3 LSAs) rather than an external route (Type 5).

 You can solve this by reconfiguring area 0.0.0.1 to be an NSSA area.
 The external routes will then be advertised via Type 7 LSAs, and the
 ABR will convert them to Type 5 LSAs injected into the backbone area.

 On the ABR(s):

 area 0.0.0.1 {
    nssa {
        default-lsa default-metric 40;
        summaries;
    }
 }

 On the interior router (J2320):

 area 0.0.0.1 {
    nssa;
 }
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp




-- 
It has to start somewhere, it has to start sometime.  What better
place than here? What better time than now?



[j-nsp] GRE tunnel - inbound traffic drops

2010-05-23 Thread Volker D. Pallas

Hi,

I'm trying to set up a simple GRE tunnel from an SRX100 running JUNOS
10.1R2.8 to a remote Linux host.

I verified using tcpdump on both sides:
- pings from Linux to JUNOS get sent but are never received (no sign of
them in a tcpdump of pp0.0/gre.0)
- pings from JUNOS to Linux arrive (also visible in a tcpdump of pp0.0)
and are replied to, but the reply never reaches JUNOS


This sounds like a problem with security zones or policies, but I have
tried just about *everything* and it never worked, not even with extreme
measures: I temporarily allowed all inbound traffic for pp0.0, put all
involved interfaces into the 'trust' zone, and so on.


this is my basic tunnel-config:
# set interfaces gre unit 0 tunnel source 87.79.237.76
# set interfaces gre unit 0 tunnel destination 80.237.249.84
# set interfaces gre unit 0 family inet6 address 
2a01:488:1000:1001:0:50ed:c910:aa01/127
# set security zones security-zone untrust interfaces gre.0 
host-inbound-traffic system-services ping


I already switched to IPv4, which was also not working, so I can rule
out that this has something to do with IPv6.


A trace on 'security' also showed the following, which I don't really like:
May 23 15:58:32 15:58:31.1697039:CID-0:RT:pak_for_self: No handler 
function found for proto:47, dst-port:2048, drop pkt


There is a second tunnel configured on that Linux box to a remote Cisco
device (same config), and that one works properly.


I would appreciate any help,
thanks in advance,
Volker


Re: [j-nsp] Immediate aging in EX-3200

2010-05-23 Thread Emil Katzarski
Hi,

There is no link flap. On the same interface I have about 1000 other
VLANs and about 5000 other MACs, and they don't flap. The problem
affects only one or a few MACs at a time.
I should double-check for a bridging loop, but I don't believe there is one.

I'm running Junos 10.1R1.

Regards: Emil

On Sat, May 22, 2010 at 5:25 PM, Chuck Anderson c...@wpi.edu wrote:
 On Fri, May 21, 2010 at 06:13:42PM +0300, Emil Katzarski wrote:
 I have a switch EX3200-48T with quite simple L2 config. Once in a
 while I can see some MAC addresses learned on the correct interface
 and VLAN, but then (less than a second later) the MAC is deleted.
 I can also see the Immediate aging counter increasing.

 Are you seeing link flaps or port errors?  Is it possible there is a
 bridging loop?  What JUNOS version are you running?




Re: [j-nsp] l2circuit communities

2010-05-23 Thread Richard A Steenbergen
On Mon, May 17, 2010 at 10:34:40PM -0400, Truman Boyes wrote:
 Hi Richard, 
 
 You can likely achieve this a different way (although your approach
 has interested me enough to check it out), by using CBF based on communities.

Oh, and a word of warning before anybody runs out and tries this: doing
this kind of forwarding-table policy to select specific LSPs seems to
SIGNIFICANTLY increase CPU use, to the point of the CPU almost never
dropping below 100%:

CPU states: 95.3% user,  0.4% nice,  2.7% system,  1.6% interrupt,  0.0% idle
Mem: 1409M Active, 445M Inact, 298M Wired, 321M Cache, 69M Buf, 1031M Free
Swap: 2048M Total, 2048M Free

  PID USERNAME  THR PRI NICE   SIZERES STATETIME   WCPU COMMAND
 2071 root1 1270   991M   975M RUN192.8H 89.16% rpd

-- 
Richard A Steenbergen r...@e-gerbil.net   http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)


[j-nsp] VRF tunnel on juniper?

2010-05-23 Thread tim tiriche
Hello,

Can I do the following on a Juniper router?


CE(1,2,3,etc) - PE1 -- core -- PE2 -- CE2

Currently the CE{1,2,3,...} routes are in the PE global table (inet.0).
The CE{1,2,3,...} are internet customers.


a) Leak the CEs' prefixes from the global table into a VRF on PE1.
What would be the best way to approach this?
I was thinking of tagging all the CE prefixes with a community and
copying them into the VRF using rib-groups.
Or is there a simpler option available?
What kind of VRF instance would this be, since there would be no
interface associated with it? And can routes in this instance be
exchanged with the VRF on PE2 using iBGP?


b) After the prefixes are copied into the VRF on PE1, CE2 (traffic
reinjection) will send traffic destined to CE1's prefixes via the VRF
tunnel.

c) Once traffic gets to the VRF on PE1, how can it be forwarded to the
appropriate CE?

Is this possible in the VRF routing table:

destination CE1 via inet.0 (CE1-interface).
destination CE2 via inet.0 (CE2-interface).

I.e., traffic is forwarded out the CE1 interface directly, without any
route lookup in inet.0.
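As a rough starting point for (a), leaking community-tagged CE routes
from inet.0 into a VRF with rib-groups might look something like this
(a hedged sketch only; the names VPN-CE, leak-ce, and
match-ce-community are illustrative, and the policy matching your CE
community still has to be written):

```
routing-options {
    rib-groups {
        leak-ce {
            import-rib [ inet.0 VPN-CE.inet.0 ];
            import-policy match-ce-community;  /* copy only tagged CE routes */
        }
    }
}
protocols {
    bgp {
        family inet {
            unicast {
                rib-group leak-ce;             /* apply to BGP-learned routes */
            }
        }
    }
}
routing-instances {
    VPN-CE {
        instance-type vrf;
        route-distinguisher 65000:1;           /* illustrative values */
        vrf-target target:65000:1;
    }
}
```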


Regards,
--tim


[j-nsp] EX-series monitoring - useful/meaningful box utilisation counters

2010-05-23 Thread Dale Shaw
Hi all,

Just curious what you are all doing to monitor your EX-series boxes.

I'm still learning the architecture of the EX4200 and EX8200 series
devices, but apart from the generic RE utilisation counters defined in
JUNIPER-MIB, what else is worth keeping an eye on?

I don't see much of note in JUNIPER-VIRTUALCHASSIS-MIB, apart from the
VC port admin/oper status.

Apart from the network-facing Ethernet interfaces, is there anything
else we should be monitoring for capacity management purposes?

cheers,
Dale