Re: [j-nsp] Tricks for killing L2 loops in VPLS and STP BPDU-less situations?

2012-08-20 Thread Christopher E. Brown


One thing I noticed when working with the BUM filter under a VPLS instance
is that there is no way, that I could find, to declare a per-instance policer.

You can call the same filter/policer in multiple VPLS instances, but the
named policer is a single global instance.  So, if you call the same filter
with a 5 Mbit policer in 20 instances, you do not get 20 separate 5 Mbit
policers; it is one policer shared across all of them.
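
For reference, the sort of config I mean, with my own names:

firewall {
    policer BUM-5M {
        if-exceeding {
            bandwidth-limit 5m;
            burst-size-limit 125k;
        }
        then discard;
    }
    family vpls {
        filter BUM-LIMIT {
            term all {
                then {
                    policer BUM-5M;
                    accept;
                }
            }
        }
    }
}

Reference BUM-LIMIT from 20 instances and you still get a single 5 Mbit
policer shared across all of them.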


I very much want to add a BUM policer by default to all VPLS instances,
but I really want to avoid creating a separate filter and policer config
for each instance when 95% of them would be running one of three
standard configs.


And yes, this was tested: on 10.4R10, Trio-based hardware (MX960s w/ MPC2
and MX80), the policer was always shared.
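
Easy enough to reproduce: offer flood traffic into two instances at the
same time and watch the policer counters, e.g.

    show firewall filter BUM-LIMIT

If the policer were instantiated per instance you would expect separate
counters; we only ever saw the one shared policer.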


On 8/17/12 3:20 PM, Chris Kawchuk wrote:
 Hi Clarke,
 
 We pass BPDUs through VPLS on the MXes, but yes, miscreant users /
 switches will always be a problem.
 
 We do the following to every customer-facing VPLS instance, but only #3 would 
 help you here:
 
 1. MAC limiting per VPLS interface (100) (i.e. per 'site')
 2. MAC limiting per VPLS (500)
 3. Rate-limiting broadcast/unknown-unicast/multicast traffic (5 Mbit) into the VPLS
 
 You can apply an input firewall filter that calls a 5 Mbit policer at
 [edit routing-instances vpls-name forwarding-options family vpls] to start
 limiting this type of traffic into the 'bridge domain' at any time.
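 
 In config terms the three look roughly like this (instance and filter
 names made up; BUM-5M-FILTER is an ordinary family vpls filter calling
 your 5 Mbit policer):
 
 routing-instances {
     CUST-VPLS-1 {
         forwarding-options {
             family vpls {
                 flood {
                     input BUM-5M-FILTER;    /* #3: BUM rate limit */
                 }
             }
         }
         protocols {
             vpls {
                 mac-table-size 500;         /* #2: per-VPLS MAC limit */
                 interface-mac-limit 100;    /* #1: per-site MAC limit */
             }
         }
     }
 }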
 
 - CK.
 
 
 On 18/08/2012, at 1:08 AM, Clarke Morledge chm...@wm.edu wrote:
 
 We have had the unfortunate experience of users plugging small
 mini-switches into our network that filter out BPDUs by default while
 allowing other traffic through.  The nightmare scenario is when a user
 accidentally plugs such a switch into two of our EX switches.  Traffic
 will loop through the miscreant switch between the two EXs, and without
 BPDUs it just looks like MAC addresses keep moving between the real
 source and the two EXs.

 In an MX environment running VPLS, this problem can happen easily, as
 there are not even BPDUs to protect against loops in VPLS, particularly
 when your VPLS domain ties into a Spanning Tree domain downstream where
 a potential miscreant switch may appear.

 I am curious to know if anyone has come up with strategies to kill these
 loops for EXs running Spanning Tree and/or MXs running VPLS.  Rate-limiting
 may help, but it doesn't kill loops completely.  I am looking for ways to
 detect large numbers of MAC address moves (without polling for them) and
 to block the interfaces involved, via some trigger mechanism, once those
 MAC moves exceed a certain threshold.

 Assume Junos 10.4R10 or more recent.
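 
 Something like an event policy is what I have in mind, if the right
 trigger exists.  A rough sketch; the event tag is a guess on my part,
 and the interface is hard-coded only for illustration (a real solution
 would need an event script that pulls the port out of the event itself):
 
 event-options {
     policy kill-mac-move-loop {
         events L2ALD_MAC_MOVE_NOTIFICATION;    /* guessed tag */
         then {
             change-configuration {
                 commands {
                     "set interfaces ge-0/0/10 disable";    /* hypothetical port */
                 }
             }
         }
     }
 }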

 Clarke Morledge
 College of William and Mary
 Information Technology - Network Engineering
 Jones Hall (Room 18)
 Williamsburg VA 23187
 
 
 


-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS



[j-nsp] SRX240H Cluster SNMP

2012-08-20 Thread Eric Van Tol
All,
Is there a version above 11.2 where SNMP works properly in a cluster?  It seems 
that when running various versions (11.2R7.4 and 11.4R4.4, so far) on a 240H 
cluster, SNMP doesn't work properly and starts spitting out 'noSuchObject' 
errors on perfectly valid queries, such as those against the interfaces MIB.  I 
should also mention that the OIDs it has a problem with are primarily 
ones that relate to the backup chassis in redundancy-group 0 (ge-5/0/0 
through ge-5/0/15).  JTAC has thus far been unsuccessful at assisting me.

I have downgraded to 10.4R10.7 on a non-production cluster and it's working 
successfully, but I really want to take advantage of the global address book.  
I can certainly live without it, but it does make things much easier.

Thanks in advance,
evt



Re: [j-nsp] SRX240H Cluster SNMP

2012-08-20 Thread Wayne Tucker
I have a couple of SRX240H clusters running 11.2R6.  I also have an
SRX650 cluster running 11.2S6.  I don't see anything in my logs to
indicate that I'm getting errors and none of my graphs show signs of
failed polls.

I doubt it matters, but I'm polling the devices through their loopback
interfaces.  I also filter out some of the interfaces and filter
duplicates:

 show configuration snmp | display inheritance | except ##
filter-interfaces {
interfaces {
fxp2;
gre;
ipip;
lo0.16384;
lo0.16385;
lo0.32768;
lsi;
mtun;
pimd;
pime;
tap;
}
}
filter-duplicates;
[snip]

Does it seem to happen the most when there are lots of queries going
through?  Are you doing row-based or column-based queries (one
interface at a time or the same counter across several interfaces)?
The former is supposed to perform better (so, for instance, an
snmpwalk is fairly processor intensive).
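
For example, with net-snmp (hostname and ifIndex 513 are placeholders):

# row-based: several counters for one interface in a single request
snmpget -v2c -c public <host> ifDescr.513 ifInOctets.513 ifOutOctets.513

# column-based: one counter across every interface (what snmpwalk does)
snmpwalk -v2c -c public <host> ifInOctets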

Any signs of trouble on your control or fabric interfaces?

Has JTAC already had you enable tracing for SNMP?

:w



On Mon, Aug 20, 2012 at 8:51 AM, Eric Van Tol e...@atlantech.net wrote:
 [snip]



Re: [j-nsp] SRX240H Cluster SNMP

2012-08-20 Thread Eric Van Tol
Hi Wayne,
Answers inline.

 I doubt it matters, but I'm polling the devices through their
 loopback
 interfaces.  I also filter out some of the interfaces and filter
 duplicates:

I do the same thing.  Just for the hell of it, I tried polling through the fxp0 
port, but the same thing happens.

 Does it seem to happen the most when there are lots of queries going
 through?  

The issue is really just trying to add the device to my NMS.  The NMS sends out 
Get requests for all the interfaces to add them into its database.  I have no 
problems doing this for a 3600 cluster or any other Juniper device.

 Any signs of trouble on your control or fabric interfaces?

Not that I can tell.  No errors or drops.

 Has JTAC already had you enable tracing for SNMP?

They had me take a capture of the queries, which I sent to them, but because 
the SRX was sending get-response packets back, that seemed to indicate to the 
JTAC engineer that there was no problem.  What he didn't do was actually look 
at the responses, where the SRX is sending 'noSuchObject' back for valid 
interface objects.  Performing a 'show snmp mib walk oid' for one of the OIDs 
for which a 'noSuchObject' was sent elicits an incredibly slow response 
from the CLI, with an eventual output of the information contained within that 
OID.
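
For example (object name illustrative, not the exact OID from the capture):

    show snmp mib walk ifDescr

takes forever from the CLI but does eventually print the ge-5/0/x entries,
while the very same objects polled over SNMP come back as 'noSuchObject'.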

Maybe I'll try 11.2R6 and see if that version works.  The SRX3600 cluster is 
running 11.2R7.4 and I'm not seeing the same problems.  It's specifically 
related to the SRX240, from what I can tell, as both the production cluster and 
the lab cluster exhibit the same behavior.

-evt



Re: [j-nsp] SRX240H Cluster SNMP

2012-08-20 Thread Mikkel Mondrup Kristensen
Hi Eric,

I had the same issue on my SRX240 cluster, and a friendly soul dug up PR800735 
for me, which mentions a workaround: set snmp filter-interfaces interfaces 
gr-0/0/0.  That made my Observium instance able to poll the cluster 
without timeouts.
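
Which ends up in the configuration as:

snmp {
    filter-interfaces {
        interfaces {
            gr-0/0/0;
        }
    }
}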

/Mikkel

On Aug 20, 2012, at 21:22 , Eric Van Tol e...@atlantech.net wrote:

 [snip]

