Re: [j-nsp] 12.1X for SRX

2014-08-25 Thread Clay Haynes
Been running 12.1X44 and 12.1X46 on multiple deployments as well without
hitting any major bugs such as core dumps or SNMP not working.

5% of our deployments do have IPS enabled, and we have not run into any
issues with them.

None of our deployments have UTM features enabled, so I cannot
vouch for that feature set.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Steel-Belted RADIUS backups

2013-08-29 Thread Clay Haynes
How about a MAG running IC + RADIUS License? It's not FreeRADIUS :)

In all seriousness perhaps you can script an export using the LDAP tools,
and import that back in?

http://www.juniper.net/techpubs/software/aaa_802/sbrc/sbrc70/sw-sbrc-admin/html/LDAPConfig6.html#334279
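An untested sketch of what such a scripted export could look like. The LCI port, bind DN, and search base below are guesses - pull the real values from the LCI settings in your radius.ini. It only prints the command (dry run) rather than executing it:

```shell
#!/bin/sh
# Hypothetical nightly export of SBR config via its LDAP configuration
# interface (LCI). Host/port/credentials are assumptions -- check your
# radius.ini for the real values before using anything like this.
SBR_HOST="localhost"
SBR_PORT="667"          # assumed LCI port; verify in radius.ini
BIND_DN="cn=admin"      # assumed admin bind DN
OUTFILE="/var/backups/sbr-$(date +%Y%m%d).ldif"

# Build the export command: -x = simple bind, -b = search base
# ('radiusclass=root' is a guess at the SBR LCI base).
EXPORT_CMD="ldapsearch -x -h $SBR_HOST -p $SBR_PORT -D $BIND_DN -W -b 'radiusclass=root' '(objectclass=*)'"

# Dry run: print the command instead of executing it, since the LCI
# schema and credentials are site-specific.
echo "$EXPORT_CMD > $OUTFILE"
```

You'd still want to test a restore of whatever this produces, same as with the tarball method.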






On 8/29/13 5:10 AM, "Dale Shaw"  wrote:

>Hi all,
>
>Does anyone out there use SBR?
>
>We have the Global Enterprise Edition (GEE) version v6.1.7 running on
>Linux.
>
>I'm putting something in place to back up SBR itself; currently we
>just tar up /opt/JNPRsbr/radius (after stopping sbrd) but it's
>occurred to me that we have never tested a recovery using this method.
>
>JTAC are telling me there is no automated way to perform the XML
>export function normally performed in the GUI. The product docs don't
>make it clear whether taking a copy of everything in
>/opt/JNPRsbr/radius/ is enough, or whether the XML export is also
>required.
>
>Looking at what the supplied install/upgrade scripts do, it's just a
>recursive 'cp' with some unnecessary folders excluded.
>
>We also take backups of the VM guest that's running SBR but I'm not
>familiar enough with SBR's back-end databases to know whether that
>results in a recoverable data set; there'll be open files for sure
>(hence the stop;tar;start method described above).
>
>What do you do?  "use FreeRADIUS instead" is a valid but unwelcome
>response :-))
>
>Cheers,
>Dale
>___
>juniper-nsp mailing list juniper-nsp@puck.nether.net
>https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] Juniper dead J series?

2013-08-01 Thread Clay Haynes
On 8/1/13 8:22 PM, "James Baker"  wrote:


>+1
>
>Occurred with  a SSG520; exact same problems
>
>RMA was the only solution (Fun at 3am Monday Morning)
>
>
>
>>-Original Message-
>>From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
>>Of Adam Leff
>>Sent: Friday, 2 August 2013 8:17 a.m.
>>To: David Gee
>>Cc: juniper-nsp@puck.nether.net
>>Subject: Re: [j-nsp] Juniper dead J series?
>>
>>David-
>>
>>I've experienced this about 7 times in the past two years, all on
>>J-series
>>routers purchased in the 2008-2009 time period... whether it was a
>>clean/coordinating power-off or if it was an unexpected power loss / UPS
>>run-
>>out.
>>
>>JTAC originally thought it was a memory issue, as there is a PSN about a
>>known
>>memory problem on a particular batch of J-series routers.  However, I've
>>still
>>had boxes with replaced memory die as you have described.
>> Current indications from JTAC are that it's a CPU/BIOS issue - we don't
>>even
>>get output on the console or anything resembling POST during boot.
>> Our latest failure has been sent to Juniper's manufacturing partner for
>>further
>>analysis.
>>
>>The only solution for us so far is to keep RMA'ing the dead boxes.  Very
>>frustrating, indeed.
>>
>>~Adam
>>
>>
>>
>>On Thu, Aug 1, 2013 at 1:19 PM, David Gee  wrote:
>>
>>> Hi all,
>>>
>>>
>>>
>>> Second post from me in the same month! Scary.
>>>
>>>
>>>
>>> So, long story short. Router went offline after a power outage. Didn't
>>> come back. Remote hands consoled in and reported back:
>>>
>>>
>>>
>>> "All four LED's are on permanently. We've unplugged, plugged it back
>>> in, rebooted and rebooted some more. No console output".
>>>
>>>
>>>
>>> I've had a quick look around but can't find anything specific. I'm
>>> thinking at this point it's either, RAM, CPU or possibly compact
>>> flash? The DC is a few miles away, so contemplating jumping on Ebay
>>> and buying some spares to cover as many realistic outcomes as
>>> possible. Any thoughts or experience with this?
>>>
>>>
>>>
>>> Thanks
>>>
>>> David
>>>
>>> ___
>>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>>
>>___
>>juniper-nsp mailing list juniper-nsp@puck.nether.net
>>https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
>
>___
>juniper-nsp mailing list juniper-nsp@puck.nether.net
>https://puck.nether.net/mailman/listinfo/juniper-nsp

I've had this happen on many (20+) SSG520M and/or SSG550M units we deployed
at %DAYJOB%. Every one that rebooted (core dump, power loss, etc.) never
came back online. Even loading firmware manually wouldn't fix the problem;
the only thing you could do was RMA them.




Re: [j-nsp] SRX Reliability

2013-06-12 Thread Clay Haynes
On 6/12/13 2:10 PM, "Paul Stewart"  wrote:


>
>
>On 2013-06-12 1:18 PM, "Brent Jones"  wrote:
>
>>On Wed, Jun 12, 2013 at 5:41 AM, Andrew Gabriel
>>wrote:
>>
>>> On Wed, Jun 12, 2013 at 3:58 PM, Phil Mayers >> >wrote:
>>>
>>> > We recently evaluated an SRX 3600, and modulo some minor cosmetic
>>>bugs
>>> and
>>> > one major one (PSN-2012-10-754, fixed in later software) they seemed
>>> solid
>>> > to me. We tested IPv4 & IPv6 layer4 firewalling, AppFW, dynamic
>>>routing
>>> > with BGP and multicast. It all seemed to work ok, and we have gone
>>>ahead
>>> > and purchased.
>>> >
>>> > It might help if you could specify what sort of things you want to do
>>>on
>>> > them e.g. IPsec, IDP, inline AV/web filtering (which the 3000s can't
>>>do)
>>> > and so forth.
>>> >
>>>
>>> Hi Phil,
>>>
>>> Thanks, we are mainly looking at basic FW, VPN, and routing capability,
>>> which we need to be rock solid. We do not intend to use the IPS and UTM
>>> type features at the moment.
>>>
>>> Thanks,
>>> -Andrew.
>>>
>>>
>>>
>>>
>>We have several sets of SRX1400s in chassis cluster, plus dozens of SRXs
>>from SRX100's up to SRX240's throughout various offices.
>>We've had minor bugs here and there, but they get resolved through code
>>or
>>workarounds, no more bugs than other vendors really.
>>Early on, yes, pre-10, tons of bugs, but 10.4 and greater are solid.
>>We do various NAT, FW, VPNs, routing instances, etc, no issues to report.
>
>I'd echo Brent's comments above - we have just over 120 SRX's in
>deployment currently and have very few issues.  Make sure you size them
>appropriately to the task if using UTM.  Yes, as mentioned before 10.x
>there was a lot of issues but we mainly deploy now at 11.4 and they are
>solid.
>
>
>Paul
>
>
>___
>juniper-nsp mailing list juniper-nsp@puck.nether.net
>https://puck.nether.net/mailman/listinfo/juniper-nsp



I echo the same sentiments as everyone else too. If there is a problem
JTAC and ATAC are very good at narrowing down the issue and finding
workarounds. The CLI is absolutely top-notch compared to other vendors,
especially ScreenOS. If you are looking for a good WebUI, you may want to
look at Space instead of the actual WebUI on the SRX.

If you're coming from ScreenOS there is a learning curve for VPN
tunnels, NAT policies (NAT is configured separately from the security
policy), and things such as GRE tunnel keepalives (look at RPM monitors -
they're awesome!).

Hit me up off list if you have any questions.




Re: [j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Clay Haynes
I've had really good luck with the ICU Upgrade for branch series. You
upload the software package to the active SRX, run the commands, and it
handles copying the package to the backup unit and all reboots. There is
still a drop in traffic for up to 30 seconds, but for the most part it's
much safer than upgrading/rebooting both units simultaneously and praying
they come up properly. Again, ICU is supported on branch-series only, and
you have to run 11.2R2 or later for it to be available.

http://www.juniper.net/techpubs/en_US/junos12.1/topics/task/operational/chassis-cluster-upgrading-and-aborting-backup-and-primary-device-with-icu.html
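For reference, the ICU invocation is a single command on the primary node. The package path/name here is just a placeholder, and you should verify the no-sync option against the docs for your release:

```
chaynes@srx240-1> request system software in-service-upgrade /var/tmp/junos-srxsme-12.1X44-D35.5-domestic.tgz no-sync
```

It handles copying to the secondary and the reboots from there; just don't touch either node until both are back on the new code.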



I haven't had great luck on ISSU, but then again I don't have many
datacenter-series boxes to play with (300+ SRX650 and below, about 10
SRX1400 and above). I would check this KB article, and if you're running any
of the services listed for your code version, do not proceed with ISSU:

http://kb.juniper.net/InfoCenter/index?page=content&id=KB17946&actp=RSS



- Clay




On 3/8/13 12:50 PM, "Andy Litzinger" 
wrote:

>We're evaluating SRX clusters as replacements for our aging ASAs FO pairs
>in various places in our network including the Datacenter Edge.  I  was
>reading the upgrade procedure KB:
>http://kb.juniper.net/InfoCenter/index?page=content&id=KB17947  and
>started to have some heart palpitations.  It seems a complicated
>procedure fraught with peril.  Anyone out there have any thoughts
>(positive/negative) on their experience on upgrading an SRX cluster with
>minimal downtime?
>
>thanks!
>-andy
>___
>juniper-nsp mailing list juniper-nsp@puck.nether.net
>https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] MAG2600 versus 4610 and above

2013-01-31 Thread Clay Haynes
David,
Have you looked at the DTE Version of the SSLVPN? It's a full Virtual SA
Appliance that allows you to test out just about any feature. If you have
a valid support contract and/or are a partner you should be covered to
test out the features.


- Clay





On 1/31/13 7:06 AM, "David Gee"  wrote:

>Hi group,
>
> 
>
>I was hoping if one of you could answer me the following question. When it
>comes to base functionality, are the features on the MAG2600 identical to
>the MAG4610 and above? I appreciate scalability is massively different,
>but
>it's more the configuration and base features I am worried about. Is the
>MAG2600 missing anything barring guts? I want to be able to lab out as
>many
>of the SSL/UAC features without spending £4k on a box and I'm hoping the
>2600 covers everything I need. My intention is to run the lab license for
>the twelve months whilst under a service contract. Will this be ok? Does
>anyone know any different?
>
> 
>
>Thanks
>
>David
>
>___
>juniper-nsp mailing list juniper-nsp@puck.nether.net
>https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] GRE Traffic

2012-10-17 Thread Clay Haynes
Hello Mohammad,
It depends - what is the device terminating the GRE tunnel? MX, SRX, J,
etc.?
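If it turns out to be an MX/M-series with a filter on lo0, a term along these lines would pass the tunnel traffic (filter and term names are made up; match on the tunnel endpoint addresses as well if you want it tighter):

```
firewall {
    family inet {
        filter protect-re {
            /* Accept GRE before the final discard term. */
            term allow-gre {
                from {
                    protocol gre;
                }
                then accept;
            }
        }
    }
}
```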


- Clay 





On 10/17/12 2:29 AM, "Mohammad Khalil"  wrote:

>Hi , I have a loopback interface configured as the source address of the
>GRE tunnel
>There is a firewall filter applied , what is the best way to permit the
>GRE
>traffic?
>
>Thanks
>
>BR,
>Mohammad
>___
>juniper-nsp mailing list juniper-nsp@puck.nether.net
>https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] Selective packet mode & local traffic

2012-08-10 Thread Clay Haynes

On 8/10/12 11:33 AM, "Wayne Tucker"  wrote:

>You can probably achieve that using apply-path.  This book has several
>good examples:
>
>http://www.juniper.net/us/en/community/junos/training-certification/day-one/fundamentals-series/securing-routing-engine/
>
>:w
>
>
>On Thu, Aug 9, 2012 at 7:37 AM, Mark Menzies  wrote:
>> Yup, we can do selective packet mode using firewall filters.
>>
>> Its normally applied in the input direction however, note, it needs to
>>be
>> on all interfaces where we will see packets that we dont want to send to
>> the flow module, ie the reply packets as well
>>
>> As for a script, sadly dont have one, however if you do get one, I would
>> like to have a copy.  :)
>>
>> On 9 August 2012 15:13, Phil Mayers  wrote:
>>
>>> All,
>>>
>>> On the J-series and branch SRX, if you want to use selective packet
>>>mode
>>> (because you want to do IPSec at the same time as MPLS, for example)
>>>then,
>>> as I understand it, you need to exclude traffic *to* the box itself
>>>from
>>> packet mode.
>>>
>>> Is this correct?
>>>
>>> Does anyone have a handy op-script that will build a prefix list of all
>>> local IPs, to help with automating this?
>>> ___
>>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>>
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>___
>juniper-nsp mailing list juniper-nsp@puck.nether.net
>https://puck.nether.net/mailman/listinfo/juniper-nsp


Try this and see if it works/is acceptable:

+  policy-options {
+  prefix-list interfaces {
+  apply-path "interfaces <*> unit <*> family inet address <*>";
+  }
+  }



Here's the output that you'll get (note that it will take the entire
subnet that the interface/unit is configured for):


chaynes@srx100-1# show | compare | display inheritance
[edit]
+  policy-options {
+  prefix-list interfaces {
  ##
  ## apply-path was expanded to:
  ## 172.16.1.0/24;
  ## 172.16.100.0/24;
  ## 10.0.0.0/24;
  ##
+  apply-path "interfaces <*> unit <*> family inet address <*>";
+  }
+  }
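The prefix-list can then be referenced in the selective packet-mode filter so traffic to the box itself stays in flow mode. A rough sketch - filter and term names are hypothetical, and you should verify the packet-mode action syntax against your release:

```
firewall {
    family inet {
        filter select-packet-mode {
            /* Traffic destined to any local address stays in flow mode. */
            term to-self {
                from {
                    destination-prefix-list {
                        interfaces;
                    }
                }
                then accept;
            }
            /* Everything else is processed in packet mode. */
            term everything-else {
                then {
                    packet-mode;
                    accept;
                }
            }
        }
    }
}
```

Remember Mark's point above: the filter needs to be applied input on every interface where the packet-mode traffic (including replies) arrives.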





- Clay




Re: [j-nsp] Quick Question About HA Setup

2012-07-17 Thread Clay Haynes
I believe the command was "configure exclusive" in order to perform a
commit confirmed on a cluster prior to 11.4; however this did have the
side effect of only allowing one user to configure the SRX cluster at a
time. Also there are no guarantees that the rollback would actually work
(hence why it was unsupported).
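From memory it was along these lines, so treat as unverified:

```
chaynes@srx> configure exclusive
chaynes@srx# commit confirmed 10
```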

- Clay






On 7/17/12 6:43 AM, "Pavel Lunin"  wrote:

>
>HmŠ didn't know that, thanks.
>
>And how about to share the unsupported way? (could not realize it myself)
>
>> Commit confirmed came into clusters in 11.4 ...
>>
>> Could always do it is unsupported ways before ... But now you can do
>> it supported in 11.4rx ...
>>
>
>___
>juniper-nsp mailing list juniper-nsp@puck.nether.net
>https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] Quick Question About HA Setup

2012-07-16 Thread Clay Haynes
SRX Technical Note 21 has the design considerations and deployment
scenarios you need. Note that this link requires a Juniper account login.

http://kb.juniper.net/InfoCenter/index?page=content&id=TN21



- Clay







On 7/16/12 5:04 AM, "Spam"  wrote:

>Is it possible to connect 2 SRX devices together into a HA Cluster by
>connecting
>the Control & Fabric Interlinks via switches or must they be directly
>connected.
>
>My planned setup is as follows:
>
>SRX<->Switch<->10GB Xconnect<->Switch<->SRX
>
>I can also give each connection is own dedicated VLAN if that would help.
>
>Spammy
>
>
>___
>juniper-nsp mailing list juniper-nsp@puck.nether.net
>https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] reth physical link not enabled

2012-06-11 Thread Clay Haynes

On 6/11/12 5:49 AM, "roland DROUAL"  wrote:

>Hello the List,
>
>I have a problem with an redundant interface reth2.
>I think the configuration is right, but I can't ping the @ IP
>The physical link is up, but the redundant interface is not enabled.
>How can I do, to have the reth2 physical link enabled ?
>
>Thanks for your help.
>
>Roland DROUAL
>
>
>toto@AS-SRX650-01# show interfaces
>ge-6/0/21 {
> gigether-options {
> redundant-parent reth2;
> }
>}
>ge-15/0/21 {
> gigether-options {
> redundant-parent reth2;
> }
>}
>...
>reth2 {
> description "802.1Q vers DMZ1";
> vlan-tagging;
> redundant-ether-options {
> redundancy-group 1;
> }
> unit 10 {
> vlan-id 10;
> family inet {
> address 193.48.41.193/29;
> }
> }
>}
>
>==
>toto@AS-SRX650-01# run show interfaces
>Physical interface: ge-6/0/21, Enabled, Physical link is Up
>   Interface index: 157, SNMP ifIndex: 299
>   Link-level type: Ethernet, MTU: 1518, Link-mode: Full-duplex, Speed:
>1000mbps, BPDU Error: None, MAC-REWRITE Error: None,
>   Loopback: Disabled, Source filtering: Disabled, Flow control:
>Enabled, Auto-negotiation: Enabled, Remote fault: Online
>   Device flags   : Present Running
>   Interface flags: SNMP-Traps Internal: 0x0
>   CoS queues : 8 supported, 8 maximum usable queues
>   Current address: 00:26:88:e2:a5:bd, Hardware address: 00:26:88:e2:a5:bd
>   Last flapped   : 2012-06-08 01:35:48 UTC (1d 16:47 ago)
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>   Active alarms  : None
>   Active defects : None
>   Interface transmit statistics: Disabled
>
>   Logical interface ge-6/0/21.10 (Index 98) (SNMP ifIndex 597)
> Flags: SNMP-Traps 0x0 VLAN-Tag [ 0x8100.10 ]  Encapsulation: ENET2
> Input packets : 0
> Output packets: 0
> Security: Zone: Null
> Protocol aenet, AE bundle: reth2.10   Link Index: 0
>
>   Logical interface ge-6/0/21.32767 (Index 97) (SNMP ifIndex 578)
> Flags: SNMP-Traps 0x0 VLAN-Tag [ 0x.0 ]  Encapsulation: ENET2
> Input packets : 0
> Output packets: 0
> Security: Zone: Null
> Protocol aenet, AE bundle: reth2.32767   Link Index: 0
>
>
>But I can't see the interface reth2. I should see:
>- "Physical interface: reth2, Enabled, Physical link is Up "
>- "  Logical interface reth2.10 "
>and I don't see them. the physical interface reth2 seems to be not
>enabled.
>
>=
>And so, there is no route for the range IP 193.48.41.192/29, via reth2.10
>
>toto@AS-SRX650-01# run show route
>
>inet.0: 11 destinations, 11 routes (10 active, 0 holddown, 1 hidden)
>+ = Active Route, - = Last Active, * = Both
>
>10.1.3.0/29*[Direct/0] 1d 18:58:31
> > via reth0.201
>10.1.3.1/32*[Local/0] 1d 18:58:31
>   Local via reth0.201
>10.1.4.0/29*[Direct/0] 1w3d 20:07:04
> > via reth1.100
>10.1.4.2/32*[Local/0] 1w3d 20:07:04
>   Local via reth1.100
>10.96.0.0/11   *[Static/5] 1w1d 16:00:44
> > to 10.1.4.1 via reth1.100
>10.115.0.0/16  *[Direct/0] 1w3d 21:08:07
> > via fxp0.0
>10.115.7.11/32 *[Local/0] 1w3d 21:08:07
>   Local via fxp0.0
>10.192.0.0/11  *[Static/5] 1d 18:58:31
> > to 10.1.3.2 via reth0.201
>195.221.125.204/30 *[Direct/0] 1d 19:22:26
> > via reth0.955
>195.221.125.206/32 *[Local/0] 1d 19:56:26
>   Local via reth0.955
>
>The interface reth2 is in a security zone:
>
> security-zone DMZ {
> host-inbound-traffic {
> system-services {
> all;
> }
> protocols {
> all;
> }
> }
> interfaces {
> reth2.10;
> }
> }
> }
>}
>
>
>
>___
>juniper-nsp mailing list juniper-nsp@puck.nether.net
>https://puck.nether.net/mailman/listinfo/juniper-nsp



Can you run a "show configuration chassis"? I'm guessing your reth-count
is set too low (should be reth-count 3).
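Something along these lines, assuming reth0 through reth2 are in use (the redundancy-group priorities are just illustrative):

```
chassis {
    cluster {
        /* Must be at least 3 for reth0, reth1, and reth2 to exist. */
        reth-count 3;
        redundancy-group 1 {
            node 0 priority 100;
            node 1 priority 1;
        }
    }
}
```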




- Clay




Re: [j-nsp] What does AS path attribute problem mean?

2011-09-09 Thread Clay Haynes
On Fri, Sep 9, 2011 at 1:07 PM, Jared Mauch  wrote:

>Well, the update is well formatted and proper, the handling in JunOS
> is buggy.  You don't want to just blackhole unknown items like this as you
> can
> create a significant problem for others similar to the bogon problems
> that exist.
>
>This type of a fix is ONLY a short term fix to workaround your buggy
> software.
>
>- Jared
>
> On Fri, Sep 09, 2011 at 12:58:36PM -0400, Andrew Parnell wrote:
> > We noticed this as well on a couple of our M7i running 9.x series
> > code, but not on others running 10.x.  This is being caused by a
> > particular prefix (212.118.142.0/24):
> >
> > rpd[5239]: xx.xx.253.192 (Internal AS xx) Received BAD update for
> > family inet-unicast(1), prefix 212.118.142.0/24
> >
> > The easy solution is to simply filter out the offending prefix.  There
> > are many ways this can be done, but the following did the trick for
> > us:
> >
> > policy-options {
> > prefix-list bad-prefixes {
> > 212.118.142.0/24;
> > }
> > policy-statement BGP-Import {
> > term block-bad-prefixes {
> > from {
> > prefix-list bad-prefixes;
> > }
> > then reject;
> > }
> > }
> > }
> >
> > Apply something like this to your BGP import and/or export policy as
> > appropriate and you should be fine.
> >
> > Andrew
> >
> > On Fri, Sep 9, 2011 at 11:41 AM, Markus  wrote:
> > > All of a sudden without changing anything in the config I'm getting the
> > > following on a M7i running 8.0R2.8:
> > >
> > > rpd[3019]: bgp_read_v4_update: NOTIFICATION sent to 89.146.xx.49
> (External
> > > AS ): code 3 (Update Message Error) subcode 11 (AS path attribute
> > > problem)
> > >
> > > The other end (Cisco) is getting:
> > >
> > > %BGP-3-NOTIFICATION: received from neighbor 89.146.xx.50 3/11 (invalid
> or
> > > corrupt AS path) 0 bytes
> > >
> > > This is causing the BGP session to flap. It happens at arbitrary
> intervals,
> > > sometimes once a minute, sometimes just once in an hour. CFEB and RE
> CPU are
> > > at steady 100% when it happens.
> > >
> > > What can I do about this and what could be the cause? Help! :)
> > >
> > > Thanks!
> > > Markus
> > >
> > > ___
> > > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > > https://puck.nether.net/mailman/listinfo/juniper-nsp
> > >
> > > __
> > > This email has been scanned by the MessageLabs Email Security System.
> > > For more information please visit http://www.messagelabs.com/email
> > > __
> > >
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
> --
> Jared Mauch  | pgp key available via finger from ja...@puck.nether.net
> clue++;  | http://puck.nether.net/~jared/  My statements are only
> mine.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>


I have a case open with TAC on this. They recommended temporarily filtering
out the bad prefix (as others have mentioned in this thread), and upgrading
to JUNOS 10.2 or later which doesn't appear to have the issue. In the
meantime they're looking for the root cause of the flap and seeing if
there's a different way to fix it for older supported versions of JUNOS.


Regards,
Clay