Re: [j-nsp] jnxOperatingTemp issues on ex4500?

2012-02-18 Thread JP Velders

 Date: Thu, 16 Feb 2012 12:53:42 -0700
 From: Jonathan Call lordsit...@hotmail.com
 Subject: [j-nsp] jnxOperatingTemp issues on ex4500?

 If I run 'show snmp mib walk jnxOperatingTemp' [ ... ] on an 
 ex4500-40f all of the entries return a non-operational status
 (i.e. zero). All of them are running 11.4R1.6.

Same here on two mixed VCs (4200-48t + 4500-40f) and one non-mixed VC.

In a mixed VC, the only non-zero entries are in this group:
jnxOperatingTemp.7.1.0.0 = 0
jnxOperatingTemp.7.2.0.0 = 30
jnxOperatingTemp.7.3.0.0 = 36
jnxOperatingTemp.7.4.0.0 = 0

Which would equate to the 4200s being members 1 and 2 (out of members 0-3 in total)...
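
For reference, the .7.x.0.0 rows can be cross-checked against the chassis
inventory by walking jnxOperatingDescr, which shares its index with
jnxOperatingTemp; a minimal sketch from the Junos CLI (the index-to-member
mapping is an assumption and may differ per platform/release):

  show snmp mib walk jnxOperatingDescr
  show snmp mib walk jnxOperatingTemp

Matching the descriptions (the FPC entries) against the .7.x.0.0 temperature
rows shows which VC member each zero reading actually belongs to.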

Kind regards,
JP Velders

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Memory issues on 10.1R3.7 (J6350)

2012-02-18 Thread Maciej Jan Broniarz
Hi,

I have a J6350 box with 2 GB of RAM, running 10.1R3.7.

For the second time in the last 24 hours, my router started to have problems 
with BGP:

Feb 18 20:40:48  j-6350 snmpd[1007]: SNMPD_HEALTH_MON_INSTANCE: Health Monitor: 
jroute daemon memory usage (Network time process): new instance detected 
(variable: sysApplElmtRunMemory.5)
Feb 18 20:40:50  j-6350 snmpd[1007]: SNMPD_HEALTH_MON_INSTANCE: Health Monitor: 
jkernel daemon memory usage (Network time process): new instance detected 
(variable: sysApplElmtRunMemory.3)
Feb 18 20:42:52  j-6350 rpd[1009]: bgp_hold_timeout:3643: NOTIFICATION sent to 
XX (External AS XX): code 4 (Hold Timer Expired Error), Reason: holdtime 
expired for 952.268 (External AS 12968), socket buffer sndcc: 95 rcvcc: 0 TCP 
state: 4, snd_una: 1099361751 snd_nxt: 1099361808 snd_wnd: 17018 rcv_nxt: 
1316583776 rcv_adv: 1316600160, hold timer 0
Feb 18 20:43:43  j-6350 rpd[1009]: bgp_hold_timeout:3643: NOTIFICATION sent to 
123.456 (External AS XXYY): code 4 (Hold Timer Expired Error), Reason: holdtime 
expired for 123.456 (External AS XXYY), socket buffer sndcc: 57 rcvcc: 0 TCP 
state: 4, snd_una: 1518297759 snd_nxt: 1518297816 snd_wnd: 65000 rcv_nxt: 
3880145662 rcv_adv: 3880162046, hold timer 0
Feb 18 20:43:47  j-6350 rpd[1009]: bgp_hold_timeout:3643: NOTIFICATION sent to 
567.890 (External AS BBCC): code 4 (Hold Timer Expired Error), Reason: holdtime 
expired for 567.890 (External AS BBCC), socket buffer sndcc: 57 rcvcc: 0 TCP 
state: 4, snd_una: 2008592799 snd_nxt: 2008592856 snd_wnd: 66560 rcv_nxt: 
186103918 rcv_adv: 186120302, hold timer 0
Feb 18 20:43:53  j-6350 rpd[1009]: bgp_hold_timeout:3643: NOTIFICATION sent to 
167.671 (External AS BBCC): code 4 (Hold Timer Expired Error), Reason: holdtime 
expired for 167.671 (External AS BBCC), socket buffer sndcc: 57 rcvcc: 0 TCP 
state: 4, snd_una: 251656026 snd_nxt: 251656083 snd_wnd: 66560 rcv_nxt: 
709348506 rcv_adv: 709364890, hold timer 0
Feb 18 20:44:27  j-6350 mib2d[1008]: LIBJSNMP_NS_LOG_WARNING: WARNING: AgentX 
master agent failed to respond to ping.  Attempting to re-register.
Feb 18 20:44:27  j-6350 mib2d[1008]: LIBJSNMP_NS_LOG_INFO: INFO: 
ns_subagent_open_session: NET-SNMP version 5.3.1 AgentX subagent connected
Feb 18 20:44:32  j-6350 rpd[1009]: bgp_recv: peer 567.890 (External AS BBCC): 
received unexpected EOF
Feb 18 20:44:39  j-6350 rpd[1009]: bgp_recv: peer 987.655 (External AS BBCC): 
received unexpected EOF
Feb 18 20:44:41  j-6350 rpd[1009]: bgp_process_caps: mismatch NLRI with 13.49 
(External AS 1234): peer: inet-unicast inet-multicast(3) us: inet-unicast(1)
Feb 18 20:45:00  j-6350 cron[82794]: (root) CMD (newsyslog)
Feb 18 20:45:18  j-6350 rpd[1009]: bgp_process_caps: mismatch NLRI with 952.268 
(External AS 7234): peer: inet-unicast inet-multicast(3) us: inet-unicast(1)
Feb 18 20:46:26  j-6350 rpd[1009]: bgp_hold_timeout:3643: NOTIFICATION sent to 
25.24 (External AS 987): code 4 (Hold Timer Expired Error), Reason: holdtime 
expired for 25.24 (External AS 34209), socket buffer sndcc: 57 rcvcc: 0 TCP 
state: 4, snd_una: 2298471556 snd_nxt: 2298471575 snd_wnd: 16210 rcv_nxt: 
820540233 rcv_adv: 820556617, hold timer 0
Feb 18 20:46:33  j-6350 snmpd[1007]: SNMPD_HEALTH_MON_INSTANCE: Health Monitor: 
jroute daemon memory usage (Management process): new instance detected 
(variable: sysApplElmtRunMemory.5.6.82787)
Feb 18 20:46:33  j-6350 snmpd[1007]: SNMPD_HEALTH_MON_INSTANCE: Health Monitor: 
jroute daemon memory usage (Command-line interface): new instance detected 
(variable: sysApplElmtRunMemory.5.8.82786)

After a few minutes everything went back to normal. 
Memory and CPU usage look fine:

show chassis routing-engine  
Routing Engine status:
Temperature 21 degrees C / 69 degrees F
CPU temperature 43 degrees C / 109 degrees F
Total memory             2048 MB Max  1126 MB used ( 55 percent)
  Control plane memory   1472 MB Max   707 MB used ( 48 percent)
  Data plane memory       576 MB Max   426 MB used ( 74 percent)
CPU utilization:
  User   1 percent
  Real-time threads  9 percent
  Kernel 0 percent
  Idle  90 percent


What might be the issue here? Thanks in advance for any help.
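
For anyone digging further, a few data points that might show whether rpd 
memory or scheduler pressure lines up with the flaps (a sketch only; exact 
output varies by release):

  show task memory detail
  show system processes extensive | match rpd
  show krt queue
  show system queues

The SNMPD_HEALTH_MON_INSTANCE lines above are informational: the SNMP health 
monitor has simply started tracking a newly seen process instance, so they are 
more likely a symptom of the churn than the cause of the hold-timer expiries.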

All best,
mjb

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 Redundant RE problem

2012-02-18 Thread Mohammad
Hi All

Thank you for your support. Most probably what we are going to do is:
- try turning GRES/NSR on/off (see the config sketch below)
- upgrade to 10.4R8.5 or 10.4R9
Currently we are waiting for JTAC's response.
I'll let you know once it is solved.
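
For reference, the knobs involved in that first step, as a minimal sketch 
(NSR requires GRES plus synchronized commits; exact behaviour and caveats 
depend on the release):

  set chassis redundancy graceful-switchover
  set routing-options nonstop-routing
  set system commit synchronize

After a commit, "show task replication" on the master RE is one way to confirm 
that replication to the backup is actually up.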

Thank you again
Mohammad Salbad

-Original Message-
From: Stefan Fouant [mailto:sfou...@shortestpathfirst.net] 
Sent: Wednesday, February 15, 2012 11:08 PM
To: Daniel Roesen
Cc: Morgan McLean; juniper-nsp@puck.nether.net; Mohammad
Subject: Re: MX960 Redundant RE problem

I was referring more to a bug in hardware... Bad memory, etc.

Stefan Fouant
JNCIE-SEC, JNCIE-SP, JNCIE-ER, JNCI
Technical Trainer, Juniper Networks

Follow us on Twitter @JuniperEducate

Sent from my iPad

On Feb 15, 2012, at 1:56 PM, Daniel Roesen d...@cluenet.de wrote:

 On Wed, Feb 15, 2012 at 12:24:50PM -0500, Stefan Fouant wrote:
 The cool thing is that the Backup RE is actually listening to all the 
 control plane messages coming in on fxp1 destined for the Master RE and 
 formulating its own decisions, running its own Dijkstra, BGP Path 
 Selection, etc. This approach is preferred over simply mirroring 
 routing state from the Primary to the Backup because it eliminates 
 fate sharing: if there is a bug on the Primary RE, we don't want to 
 create a carbon copy of it on the Backup.
 
 I don't really buy that argument. Running the same code with the same 
 algorithm against the same data usually leads to the same results.
 You'll get full bug redundancy; I'd expect both REs to crash simultaneously.
 Did NSR protect against any of the recent BGP bugs?
 
 The advantages I see are less impactful failovers in case of a) 
 hardware failures of the active RE, or b) data structure corruption 
 happening on both REs [same code = same bugs] but eventually leading 
 to a crash of the active RE sooner than on the backup RE, or c) race 
 conditions being triggered sufficiently differently, timing-wise, that 
 only the active RE crashes.
 
 Am I missing something?
 
 Best regards,
 Daniel
 
 --
 CLUE-RIPE -- Jabber: d...@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
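
For what it's worth, whether the backup RE has independently converged can be 
checked directly; a minimal sketch (command availability varies by release):

  show task replication
  show bgp replication
  request routing-engine login other-routing-engine
  show route summary

Comparing "show route summary" on both REs is a quick sanity check that the 
backup's own path selection has ended up in the same place as the master's.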

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] How to setup jflow sensor for Juniper M320 Router with PRTG?

2012-02-18 Thread hani ibrahim
Dear All,

Kindly, I need to know how to set up Jflow on an M320 router (RE-1600), and is
PRTG supported?

I appreciate your help and support.

BR,
Hany
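
A minimal sketch of v5 flow export (cflowd) from an M-series box toward a 
collector such as PRTG; the interface, the collector address 192.0.2.10, the 
port and the sampling rate are placeholders, and the exact hierarchy differs 
between Junos releases:

  set interfaces ge-0/0/0 unit 0 family inet sampling input
  set forwarding-options sampling input family inet rate 100
  set forwarding-options sampling output cflowd 192.0.2.10 port 2055
  set forwarding-options sampling output cflowd 192.0.2.10 version 5

PRTG ships NetFlow/jFlow v5 sensors, so pointing the export at the PRTG probe 
address and matching the port in the sensor settings should be enough for it 
to start receiving flows.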
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp