[j-nsp] SNMP ifIndex 0 on MX after ISSU

2013-03-08 Thread Jonas Frey (Probe Networks)
Hello,

Did anyone ever notice problems with wrong/changed SNMP ifIndex values
after ISSU?
We ISSU-upgraded an MX from 10.4R9.2 to 11.4R7.5, and afterwards some of
the ifIndex values changed. During the ISSU it brought down FPC 1 (which
is an MPC Type 2); maybe that's why the ifIndex values changed.
(We are running mixed DPCE and MPC.)
In any case, I now have the problem that some of the interfaces no
longer have an SNMP ifIndex at all:

user@router> show interfaces ge-1/0/2.1
  Logical interface ge-1/0/2.1 (Index 333) (SNMP ifIndex 0)
Description: C28711
Flags: SNMP-Traps VLAN-Tag [ 0x8100.141 ]  Encapsulation: ENET2
Input packets : 6785935 
Output packets: 4257005
Protocol inet, MTU: 1500
  Flags: No-Redirects, Sendbcast-pkt-to-re
[...]
(This is an interface on the MPC card.)
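
A quick way to see whether other units are affected, since the CLI prints
the value inline as above:

  user@router> show interfaces | match "SNMP ifIndex 0"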

I saw some posts about this happening on EX but none on MX.

How do I get the ifIndex right? The workaround for EX doesn't help, as
there is no such process to restart on the MX series.

Best regards,
Jonas


Re: [j-nsp] SNMP ifIndex 0 on MX after ISSU

2013-03-08 Thread Tobias Heister
Hi,

On 08.03.2013 16:33, Jonas Frey (Probe Networks) wrote:
 Did anyone ever notice problems with wrong/changed SNMP ifIndex values
 after ISSU?
 We ISSU-upgraded an MX from 10.4R9.2 to 11.4R7.5, and afterwards some of
 the ifIndex values changed.

We had that a couple of times with the MX series (with and without ISSU); the
last time it happened going from 9.6RX to 10.4RX on a couple of systems.
We will soon go from 10.4RX to 11.4RX, so I am expecting it to happen again.

 How do I get the ifIndex right? The workaround for EX doesn't help, as
 there is no such process to restart on the MX series.

I am not aware of a way to fix that. We usually have to fix it in our NMS, 
which is really annoying every time it happens.

regards
Tobias


[j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Andy Litzinger
We're evaluating SRX clusters as replacements for our aging ASA FO pairs in
various places in our network, including the datacenter edge. I was reading
the upgrade procedure KB,
http://kb.juniper.net/InfoCenter/index?page=content&id=KB17947, and started to
have some heart palpitations. It seems a complicated procedure fraught with
peril. Does anyone out there have any thoughts (positive/negative) on their
experience upgrading an SRX cluster with minimal downtime?

thanks!
-andy


Re: [j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Tim Eberhard
I would never, ever follow that KB. It's just asking for a major outage.

With that said, you have two options: 1) ISSU, or 2) reboot both close
to the same time and take the hit. Depending on your hardware it might
be 4 minutes, or it might be 8-10 minutes.

If option one is the path you choose, keep in mind the limitations, and
I would suggest you test it in a lab well before you ever do it in
production. ISSU on the SRX is still *very* new. Here is a list of
limitations:
http://kb.juniper.net/InfoCenter/index?page=content&id=KB17946&actp=RSS
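
If you do go that route, the invocation is roughly the following, run from
the primary node (package path is hypothetical here; double-check the
options against your release):

  user@srx> request system software in-service-upgrade /var/tmp/junos-srx5000-11.4R7.5-domestic.tgz reboot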

I've seen ISSU fail more than a couple of times when it was supposed
to be fully supported. This caused us to take a hit and then have to
reboot both devices anyway, so we ended up expecting a hitless
upgrade and got 10 minutes of downtime anyway. If you're up for
running bleeding-edge code then maybe ISSU will work properly, but if
availability is that critical you should have a lab to test this in.

Good luck,
-Tim Eberhard

On Fri, Mar 8, 2013 at 9:50 AM, Andy Litzinger
andy.litzin...@theplatform.com wrote:
 We're evaluating SRX clusters as replacements for our aging ASAs FO pairs in 
 various places in our network including the Datacenter Edge.  I  was reading 
 the upgrade procedure KB: 
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB17947  and started
 to have some heart palpitations.  It seems a complicated procedure fraught 
 with peril.  Anyone out there have any thoughts (positive/negative) on their 
 experience on upgrading an SRX cluster with minimal downtime?

 thanks!
 -andy


Re: [j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Aaron Dewell

Not that I've had to do it, but I'd probably break the cluster to do the
upgrade and run on one node during the procedure.
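
Something like this should take a node out of the cluster (untested by me,
so treat it as a sketch and have console access handy):

  user@srx> set chassis cluster cluster-id 0 node 0 reboot

Upgrade it standalone, re-form the cluster, then repeat on the other node.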

On Mar 8, 2013, at 10:50 AM, Andy Litzinger wrote:
 We're evaluating SRX clusters as replacements for our aging ASAs FO pairs in 
 various places in our network including the Datacenter Edge.  I  was reading 
 the upgrade procedure KB: 
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB17947  and started
 to have some heart palpitations.  It seems a complicated procedure fraught 
 with peril.  Anyone out there have any thoughts (positive/negative) on their 
 experience on upgrading an SRX cluster with minimal downtime?
 
 thanks!
 -andy


Re: [j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Mark Menzies
Yes, the upgrade process is not the best.

The link above puts names on the tasks needed to effectively split the cluster
in such a way that you can reconnect it without loss of connectivity.

The best approach, which does NOT mean minimal downtime, is to upgrade
both nodes and then reboot them both at the same time.  It's less
complicated and less prone to error, but it does mean that the services are
down for the time it takes for the boxes to boot and bring up all
interfaces.
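
In practice that is just the following on each node (package path
hypothetical):

  user@srx> request system software add /var/tmp/junos-srxsme-11.4R7.5-domestic.tgz no-copy
  user@srx> request system reboot

with the two reboots kicked off as close together as you can manage.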

It's something that I hope Juniper are looking at.


On 8 March 2013 17:50, Andy Litzinger andy.litzin...@theplatform.com wrote:

 We're evaluating SRX clusters as replacements for our aging ASAs FO pairs
 in various places in our network including the Datacenter Edge.  I  was
 reading the upgrade procedure KB:
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB17947  and
 started to have some heart palpitations.  It seems a complicated procedure
 fraught with peril.  Anyone out there have any thoughts (positive/negative)
 on their experience on upgrading an SRX cluster with minimal downtime?

 thanks!
 -andy


Re: [j-nsp] SNMP ifIndex 0 on MX after ISSU

2013-03-08 Thread Jonas Frey (Probe Networks)
Hi,

BTW, I already tried restart mib-process and restart snmp; neither was
of any help.
Also, I can actually see the ifIndex in /var/db/dcd.snmp_ix (it is 560
for this interface), but reading it via SNMP always returns 0 despite
the interface carrying traffic.
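
For reference, the two checks look roughly like this (SNMP host and
community are placeholders):

  user@router> file show /var/db/dcd.snmp_ix | match ge-1/0/2.1

  $ snmpwalk -v2c -c public router IF-MIB::ifDescr | grep ge-1/0/2.1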



On Friday, 08.03.2013 at 17:42 +0100, Tobias Heister wrote:
 Hi,
 
 On 08.03.2013 16:33, Jonas Frey (Probe Networks) wrote:
  Did anyone ever notice problems with wrong/changed SNMP ifIndex values
  after ISSU?
  We ISSU-upgraded an MX from 10.4R9.2 to 11.4R7.5, and afterwards some of
  the ifIndex values changed.
 
 We had that a couple of times with the MX series (with and without ISSU); the
 last time it happened going from 9.6RX to 10.4RX on a couple of systems.
 We will soon go from 10.4RX to 11.4RX, so I am expecting it to happen again.
 
  How do I get the ifIndex right? The workaround for EX doesn't help, as
  there is no such process to restart on the MX series.
 
 I am not aware of a way to fix that. We usually have to fix it in our NMS, 
 which is really annoying every time it happens.
 
 regards
 Tobias



Re: [j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Eric Van Tol
 -Original Message-
 From: juniper-nsp-boun...@puck.nether.net [mailto:juniper-nsp-
 boun...@puck.nether.net] On Behalf Of Mark Menzies
 Sent: Friday, March 08, 2013 1:03 PM
 To: Andy Litzinger
 Cc: juniper-nsp@puck.nether.net
 Subject: Re: [j-nsp] SRX upgrade procedure -ready for enterprise?
 
 The best approach, which does NOT mean minimal downtime, is to upgrade
 both nodes and then reboot them both at the same time.  It's less
 complicated and less prone to error, but it does mean that the services
 are down for the time it takes for the boxes to boot and bring up all
 interfaces.

I've thought about this a lot lately. At what point do the two nodes start
communicating with each other after a reboot? What I'm getting at is: could you
upgrade both, reboot one node, then right before it comes back online fully,
reboot the other one? This way, you're not waiting around a full reboot cycle
for service to return.
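
As a hand-timed sketch of what I mean (no guarantees on where the cutoff
would be):

  node0> request system reboot
  # watch from node1 until node0 is about to come back:
  node1> show chassis cluster status
  node1> request system reboot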

-evt



Re: [j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Aaron Dewell

I tried ISSU twice, both times on 3 MX routers during a single maintenance
window, going from 10.x to 11.x.  It failed spectacularly on the second router,
requiring manual recovery via the console (mastership was not assumed by the
backup before the primary rebooted), so I completely gave up on the procedure
and did the rest of the 50+ routers in the network the old-fashioned way, one
RE at a time, with a 2-minute hit for the switchover in the middle.
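
For the archives, the old-fashioned way is roughly this (package name
hypothetical, and it assumes GRES is configured):

  # on the backup RE:
  user@mx> request system software add /var/tmp/jinstall-11.4R7.5-domestic-signed.tgz
  user@mx> request system reboot
  # once it is back, flip mastership (this is the ~2 minute hit):
  user@mx> request chassis routing-engine master switch
  # then repeat the install on the former master RE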

After that, I don't recommend ISSU to anyone.  It's not worth the hassle, at
least not yet.  Maybe around 14.x it will be stable enough to use.

On Mar 8, 2013, at 11:10 AM, Tim Eberhard wrote:
 I would never, ever follow that KB. It's just asking for a major outage.
 
 With that said, you have two options. 1) ISSU and 2) Reboot both close
 to the same time and take the hit. Depending on your hardware it might
 be 4 minutes, it might be 8-10 minutes.
 
 If option one is the path you choose to go keep in mind the
 limitations and I would suggest you test it in a lab well before you
 ever do it in production. ISSU on the SRX is still *very* new. Here is
 a list of limitations:
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB17946&actp=RSS
 
 I've seen ISSU fail more than a couple of times when it was supposed
 to be fully supported. This caused us to take a hit, then have to
 reboot both devices anyways. So we ended up expecting a hitless
 upgrade and got 10 minutes of downtime anyways. If you're up for
 running bleeding edge code then maybe ISSU will work properly, but if
 availability is that critical you should have a lab to test this in.
 
 Good luck,
 -Tim Eberhard
 
 On Fri, Mar 8, 2013 at 9:50 AM, Andy Litzinger
 andy.litzin...@theplatform.com wrote:
 We're evaluating SRX clusters as replacements for our aging ASAs FO pairs in 
 various places in our network including the Datacenter Edge.  I  was reading 
 the upgrade procedure KB: 
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB17947  and started
 to have some heart palpitations.  It seems a complicated procedure fraught 
 with peril.  Anyone out there have any thoughts (positive/negative) on their 
 experience on upgrading an SRX cluster with minimal downtime?
 
 thanks!
 -andy


Re: [j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Andy Litzinger
What pieces of the KB do you think contribute to the possibility of major
outages?  Not having tested anything, it already makes me nervous that failover
is initiated solely by shutting down the interfaces on the active node...
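
I would have naively expected the failovers to be forced per redundancy
group instead, something like:

  user@srx> request chassis cluster failover redundancy-group 1 node 1
  user@srx> request chassis cluster failover reset redundancy-group 1

rather than by touching interfaces.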

 -Original Message-
 From: Tim Eberhard [mailto:xmi...@gmail.com]
 Sent: Friday, March 08, 2013 10:11 AM
 To: Andy Litzinger
 Cc: juniper-nsp@puck.nether.net
 Subject: Re: [j-nsp] SRX upgrade procedure -ready for enterprise?
 
 I would never, ever follow that KB. It's just asking for a major outage.
 
 With that said, you have two options. 1) ISSU and 2) Reboot both close
 to the same time and take the hit. Depending on your hardware it might
 be 4 minutes, it might be 8-10 minutes.
 
 If option one is the path you choose to go keep in mind the
 limitations and I would suggest you test it in a lab well before you
 ever do it in production. ISSU on the SRX is still *very* new. Here is
 a list of limitations:
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB17946&actp=RSS
 
 I've seen ISSU fail more than a couple of times when it was supposed
 to be fully supported. This caused us to take a hit, then have to
 reboot both devices anyways. So we ended up expecting a hitless
 upgrade and got 10 minutes of downtime anyways. If you're up for
 running bleeding edge code then maybe ISSU will work properly, but if
 availability is that critical you should have a lab to test this in.
 
 Good luck,
 -Tim Eberhard
 
 On Fri, Mar 8, 2013 at 9:50 AM, Andy Litzinger
 andy.litzin...@theplatform.com wrote:
  We're evaluating SRX clusters as replacements for our aging ASAs FO pairs
 in various places in our network including the Datacenter Edge.  I  was 
 reading
 the upgrade procedure KB:
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB17947  and
 started to have some heart palpitations.  It seems a complicated procedure
 fraught with peril.  Anyone out there have any thoughts (positive/negative)
 on their experience on upgrading an SRX cluster with minimal downtime?
 
  thanks!
  -andy


Re: [j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Mark Tees
From 11.2R2 onwards you have ICU for the SRX100, SRX210, SRX220, SRX240, and
SRX650:

http://www.juniper.net/techpubs/en_US/junos11.4/topics/task/operational/chassis-cluster-upgrading-both-device-with-icu.html

This, together with the no-tcp-syn-check option (thanks Craig), might possibly
make life easier. I haven't had a chance to try this yet, though.
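
For reference, I believe the knob in question is the one below, set on both
nodes (mind the security trade-off of relaxing SYN checking, and revert it
after the upgrade window):

  set security flow tcp-session no-syn-check

The idea is that established sessions keep passing traffic during the
failover without strict SYN checking.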

On 09/03/2013, at 6:49 AM, Andy Litzinger wrote:

 what pieces of the KB do you think contribute to the possibility of major 
 outages?  Not having tested anything it already makes me nervous that 
 failover is initiated solely by shutting down the interfaces on the active 
 node...
 
 -Original Message-
 From: Tim Eberhard [mailto:xmi...@gmail.com]
 Sent: Friday, March 08, 2013 10:11 AM
 To: Andy Litzinger
 Cc: juniper-nsp@puck.nether.net
 Subject: Re: [j-nsp] SRX upgrade procedure -ready for enterprise?
 
 I would never, ever follow that KB. It's just asking for a major outage.
 
 With that said, you have two options. 1) ISSU and 2) Reboot both close
 to the same time and take the hit. Depending on your hardware it might
 be 4 minutes, it might be 8-10 minutes.
 
 If option one is the path you choose to go keep in mind the
 limitations and I would suggest you test it in a lab well before you
 ever do it in production. ISSU on the SRX is still *very* new. Here is
 a list of limitations:
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB17946&actp=RSS
 
 I've seen ISSU fail more than a couple of times when it was supposed
 to be fully supported. This caused us to take a hit, then have to
 reboot both devices anyways. So we ended up expecting a hitless
 upgrade and got 10 minutes of downtime anyways. If you're up for
 running bleeding edge code then maybe ISSU will work properly, but if
 availability is that critical you should have a lab to test this in.
 
 Good luck,
 -Tim Eberhard
 
 On Fri, Mar 8, 2013 at 9:50 AM, Andy Litzinger
 andy.litzin...@theplatform.com wrote:
 We're evaluating SRX clusters as replacements for our aging ASAs FO pairs
 in various places in our network including the Datacenter Edge.  I  was 
 reading
 the upgrade procedure KB:
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB17947  and
 started to have some heart palpitations.  It seems a complicated procedure
 fraught with peril.  Anyone out there have any thoughts (positive/negative)
 on their experience on upgrading an SRX cluster with minimal downtime?
 
 thanks!
 -andy


Re: [j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Andy Litzinger
ICU sounds interesting.  Any idea why it's not supported on the 550? Or is that
just documentation lag?

 -Original Message-
 From: Clay Haynes [mailto:chay...@centracomm.net]
 Sent: Friday, March 08, 2013 3:08 PM
 To: Andy Litzinger; juniper-nsp@puck.nether.net
 Subject: Re: [j-nsp] SRX upgrade procedure -ready for enterprise?
 
 I've had really good luck with the ICU Upgrade for branch series. You upload
 the software package to the active SRX, run the commands, and it handles
 copying the package to the backup unit and all reboots. There is still a drop 
 in
 traffic for up to 30 seconds, but for the most part it's much safer than
 upgrading/rebooting both units simultaneously and praying they come up
 properly. Again, ICU is supported on branch-series only, and you have run
 11.2r2 or later for it to be available.
 
 http://www.juniper.net/techpubs/en_US/junos12.1/topics/task/operational/chassis-cluster-upgrading-and-aborting-backup-and-primary-device-with-icu.html
 
 
 
 I haven't had great luck on ISSU, but then again I don't have many
 datacenter-series boxes to play with (300+ SRX650 and below, about 10
 SRX1400 and above). I would follow this URL, and if you're running any of
 these services in the respective code do not proceed with the ISSU:
 
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB17946&actp=RSS
 
 
 
 - Clay
 
 
 
 
 On 3/8/13 12:50 PM, Andy Litzinger andy.litzin...@theplatform.com
 wrote:
 
 We're evaluating SRX clusters as replacements for our aging ASAs FO
 pairs in various places in our network including the Datacenter Edge.
 I  was reading the upgrade procedure KB:
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB17947  and
 started to have some heart palpitations.  It seems a complicated
 procedure fraught with peril.  Anyone out there have any thoughts
 (positive/negative) on their experience on upgrading an SRX cluster
 with minimal downtime?
 
 thanks!
 -andy


Re: [j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Mike Devlin
Mark/Andy,

Thanks for the input. I have a cluster of SRX100s in my lab that I'm going to
test this out on; it's been a nightmare doing it in the past.

Looking forward to testing this out now. :)



On Fri, Mar 8, 2013 at 6:13 PM, Andy Litzinger 
andy.litzin...@theplatform.com wrote:

 ICU sounds interesting.  Any idea why it's not supported on the 550? Or is
 that just documentation lag?

  -Original Message-
  From: Clay Haynes [mailto:chay...@centracomm.net]
  Sent: Friday, March 08, 2013 3:08 PM
  To: Andy Litzinger; juniper-nsp@puck.nether.net
  Subject: Re: [j-nsp] SRX upgrade procedure -ready for enterprise?
 
  I've had really good luck with the ICU Upgrade for branch series. You
 upload
  the software package to the active SRX, run the commands, and it handles
  copying the package to the backup unit and all reboots. There is still a
 drop in
  traffic for up to 30 seconds, but for the most part it's much safer than
  upgrading/rebooting both units simultaneously and praying they come up
  properly. Again, ICU is supported on branch-series only, and you have run
  11.2r2 or later for it to be available.
 
  http://www.juniper.net/techpubs/en_US/junos12.1/topics/task/operational/chassis-cluster-upgrading-and-aborting-backup-and-primary-device-with-icu.html
 
 
 
  I haven't had great luck on ISSU, but then again I don't have many
  datacenter-series boxes to play with (300+ SRX650 and below, about 10
  SRX1400 and above). I would follow this URL, and if you're running any of
  these services in the respective code do not proceed with the ISSU:
 
  http://kb.juniper.net/InfoCenter/index?page=content&id=KB17946&actp=RSS
 
 
 
  - Clay
 
 
 
 
  On 3/8/13 12:50 PM, Andy Litzinger andy.litzin...@theplatform.com
  wrote:
 
  We're evaluating SRX clusters as replacements for our aging ASAs FO
  pairs in various places in our network including the Datacenter Edge.
  I  was reading the upgrade procedure KB:
  http://kb.juniper.net/InfoCenter/index?page=content&id=KB17947  and
  started to have some heart palpitations.  It seems a complicated
  procedure fraught with peril.  Anyone out there have any thoughts
  (positive/negative) on their experience on upgrading an SRX cluster
  with minimal downtime?
  
  thanks!
  -andy


Re: [j-nsp] SRX upgrade procedure -ready for enterprise?

2013-03-08 Thread Clay Haynes
I've had really good luck with the ICU upgrade for the branch series. You
upload the software package to the active SRX, run the commands, and it
handles copying the package to the backup unit and all reboots. There is
still a drop in traffic for up to 30 seconds, but for the most part it's
much safer than upgrading/rebooting both units simultaneously and praying
they come up properly. Again, ICU is supported on the branch series only, and
you have to run 11.2r2 or later for it to be available.

http://www.juniper.net/techpubs/en_US/junos12.1/topics/task/operational/chassis-cluster-upgrading-and-aborting-backup-and-primary-device-with-icu.html
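
The invocation itself is a one-liner from the primary node, roughly as below
(package path hypothetical; check the exact options for your release, and
note the same page documents how to abort a running ICU):

  user@srx> request system software in-service-upgrade /var/tmp/junos-srxsme-11.2R2.4-domestic.tgz no-sync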



I haven't had great luck with ISSU, but then again I don't have many
datacenter-series boxes to play with (300+ SRX650 and below, about 10
SRX1400 and above). I would follow this URL, and if you're running any of
the listed services on the respective code, do not proceed with the ISSU:

http://kb.juniper.net/InfoCenter/index?page=content&id=KB17946&actp=RSS



- Clay




On 3/8/13 12:50 PM, Andy Litzinger andy.litzin...@theplatform.com
wrote:

We're evaluating SRX clusters as replacements for our aging ASAs FO pairs
in various places in our network including the Datacenter Edge.  I  was
reading the upgrade procedure KB:
http://kb.juniper.net/InfoCenter/index?page=content&id=KB17947  and
started to have some heart palpitations.  It seems a complicated
procedure fraught with peril.  Anyone out there have any thoughts
(positive/negative) on their experience on upgrading an SRX cluster with
minimal downtime?

thanks!
-andy