Re: [j-nsp] Ex8208 TRAP

2018-05-20 Thread Aaron Gould
Guessing, based on the 2:32 a.m. alarms: a link 7 SFP issue, a far-side SFP issue, or a cable issue?
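
If it is an optic problem, the DOM readings should confirm it. Assuming "link 7"
maps to xe-0/0/7 on that PIC (that mapping is a guess), something like this shows
the live levels against the alarm/warning thresholds:

admin@Core-SW-8208> show interfaces diagnostics optics xe-0/0/7

Low laser output power or laser bias current points at the local SFP+; low
receiver optical power points at the far-end optic or the fiber path.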

Aaron

> On May 20, 2018, at 12:44 AM, Mohammed Abu Sultan  wrote:
> 
> Hi All,
> 
> 
> I have a strange issue on my EX8208 core switch: a sudden trap caused a
> network outage. When I checked the logs, I found:
> 
> 
> Core-SW-8208> show log messages
> 
> May 19 20:42:52  Core-SW-8208 chassisd[1338]: CHASSISD_SNMP_TRAP10: SNMP trap 
> generated: FRU power on (jnxFruContentsIndex 8, jnxFruL1Index 1, 
> jnxFruL2Index 1, jnxFruL3Index 0, jnxFruName PIC: 8x 10GE SFP+ @ 0/0/*, 
> jnxFruType 11, jnxFruSlot 0, jnxFruOfflineReason 2, jnxFruLastPowerOff 
> 20675705, jnxFruLastPowerOn 20686688)
> May 19 20:42:53  Core-SW-8208 /kernel: pic_listener_connect: conn 
> established: mgmt addr=0x8110,
> May 19 20:42:52  Core-SW-8208 chassisd[1338]: CHASSISD_IFDEV_CREATE_NOTICE: 
> create_pics: created interface device for xe-0/0/0
> May 19 20:42:53  Core-SW-8208 /kernel: kernel overwrite ae0 link-speed with 
> child speed 100
> May 19 20:42:53  Core-SW-8208 /kernel: drv_ge_misc_handler: ifd:148  new 
> address:b0:c6:9a:c9:ae:03
> May 19 20:42:53  Core-SW-8208 /kernel: drv_ge_misc_handler: ifd:149  new 
> address:b0:c6:9a:c9:ae:03
> May 19 20:42:52  Core-SW-8208 chassisd[1338]: CHASSISD_IFDEV_CREATE_NOTICE: 
> create_pics: created interface device for xe-0/0/1
> May 19 20:42:53  Core-SW-8208 chassisd[1338]: CHASSISD_IFDEV_CREATE_NOTICE: 
> create_pics: created interface device for xe-0/0/2
> May 19 20:42:53  Core-SW-8208 chassisd[1338]: CHASSISD_IFDEV_CREATE_NOTICE: 
> create_pics: created interface device for xe-0/0/3
> May 19 20:42:53  Core-SW-8208 chassisd[1338]: CHASSISD_IFDEV_CREATE_NOTICE: 
> create_pics: created interface device for xe-0/0/4
> May 19 20:42:53  Core-SW-8208 chassisd[1338]: CHASSISD_IFDEV_CREATE_NOTICE: 
> create_pics: created interface device for xe-0/0/5
> May 19 20:42:53  Core-SW-8208 chassisd[1338]: CHASSISD_IFDEV_CREATE_NOTICE: 
> create_pics: created interface device for xe-0/0/6
> May 19 20:42:53  Core-SW-8208 chassisd[1338]: CHASSISD_IFDEV_CREATE_NOTICE: 
> create_pics: created interface device for xe-0/0/7
> Jun 14 02:32:18  Core-SW-8208 member0-fpc0  chassism[118]:  link 2 SFP 
> receive power low  alarm set
> Jun 14 02:32:18  Core-SW-8208 member0-fpc0  chassism[118]:  link 2 SFP 
> receive power low  warning set
> Jun 14 02:32:18  Core-SW-8208 member0-fpc0  chassism[118]:  link 3 SFP 
> receive power low  alarm set
> Jun 14 02:32:18  Core-SW-8208 member0-fpc0  chassism[118]:  link 3 SFP 
> receive power low  warning set
> Jun 14 02:32:18  Core-SW-8208 member0-fpc0  chassism[118]:  link 7 SFP laser 
> bias current low  alarm set
> Jun 14 02:32:18  Core-SW-8208 member0-fpc0  chassism[118]:  link 7 SFP output 
> power low  alarm set
> Jun 14 02:32:18  Core-SW-8208 member0-fpc0  chassism[118]:  link 7 SFP laser 
> bias current low  warning set
> Jun 14 02:32:18  Core-SW-8208 member0-fpc0  chassism[118]:  link 7 SFP output 
> power low  warning set
> May 19 20:42:53  Core-SW-8208 member0-fpc0 pfe_pme_max 24
> May 19 20:42:53  Core-SW-8208 member0-fpc0 
> PFE_L2,mrvl_brg_port_stg_entry_set():7472:Received MSTI-default-rt, not 
> expected
> May 19 20:42:54  Core-SW-8208 member0-fpc0 Error: Adding filter(block_pvst) 
> cntr(pvst) plcr((null)) index(1) entry
> May 19 20:42:52  Core-SW-8208 member0-fpc1  chassism[120]: 
> ccm_fm_hsl2_sid_plane_ctl_ack: FPC SID_PLANE_CTL_ACK fpc 1 sent; err=0
> May 19 20:44:24  Core-SW-8208 member0-fpc0  chassism[118]: CM: 
> cm_hcm_pfem_resync_done: fpc_slot: 0
> May 19 20:44:24  Core-SW-8208 member0-fpc0  chassism[118]: CM: 
> cm_hcm_pfem_resync_done: FPC PFEM resync_done fpc 0 sent; err:0
> 
> 
> In your experience, what is this issue, why does it occur, and how can it be fixed?
> 
> 
> I appreciate your support in advance.
> 
> 
> Regards,
> 
> Mohammed


Re: [j-nsp] QFX5100 System Process SNMP monitor

2018-05-20 Thread Chris Lee via juniper-nsp
Hi Dale

Thanks, you are spot on. I ended up finding hrSystemProcesses inside the
rfc2790a MIB and was able to get exactly what I needed into PRTG.
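
In case it helps anyone searching the archives later: hrSystemProcesses lives at
OID .1.3.6.1.2.1.25.1.6, so instance .0 is what a PRTG SNMP Custom sensor (or any
other poller) should fetch. A quick sanity check with Net-SNMP; the community
string and hostname here are placeholders:

snmpget -v2c -c public qfx5100.example.net .1.3.6.1.2.1.25.1.6.0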

Thanks
Chris

On Sun, 20 May 2018 at 18:48, Dale Shaw  wrote:

> Hi Chris,
>
>
> On Sun, 20 May 2018 at 1:01 pm, Chris Lee via juniper-nsp <
> juniper-nsp@puck.nether.net> wrote:
> >
> > Hi all,
> >
> > We recently hit a jdhcpd bug in our QFX5100 VC (14.1X53-D30 release) which
> > looks to be from the number of defunct zombie processes increasing over
> > time leading up to an ungraceful failover of the routing engines.
> >
> > I have LibreNMS monitoring the QFX and it automagically graphs the running
> > process count, but I'm struggling to figure out an SNMP MIB object number
> > that gives me the same process count, as I'd like to monitor the same value
> > in our existing PRTG installation to send email/SMS alerts when certain
> > thresholds are reached until we can follow JTAC's recommendation to upgrade
> > the QFX release to D46.
> >
> > I've downloaded the MIB pack from Juniper and tried grepping the files for
> > "process" and "processes" but can't seem to find anything relevant.
> >
> > Anyone familiar with the SNMP on the QFX know where I might find total
> > system process count ?
>
> Junos implements HOST-RESOURCES-MIB, so you should be able to poll the
> "hrSystemProcesses" object (gauge). Maybe that's how LibreNMS is doing it?
>
> admin@gw> show snmp mib walk hrSystemProcesses
> hrSystemProcesses.0 = 125
>
> admin@gw> show system processes summary
> last pid: 40910;  load averages:  0.20,  0.19,  0.17  up 29+08:45:43
> 08:45:48
> 126 processes: 18 running, 95 sleeping, 1 zombie, 12 waiting
> [...]
>
> Cheers
> Dale
>
>
> >
> >
> > Thanks,
> > Chris


Re: [j-nsp] QFX5100 System Process SNMP monitor

2018-05-20 Thread Dale Shaw
Hi Chris,

On Sun, 20 May 2018 at 1:01 pm, Chris Lee via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:
>
> Hi all,
>
> We recently hit a jdhcpd bug in our QFX5100 VC (14.1X53-D30 release) which
> looks to be from the number of defunct zombie processes increasing over
> time leading up to an ungraceful failover of the routing engines.
>
> I have LibreNMS monitoring the QFX and it automagically graphs the running
> process count, but I'm struggling to figure out an SNMP MIB object number
> that gives me the same process count, as I'd like to monitor the same value
> in our existing PRTG installation to send email/SMS alerts when certain
> thresholds are reached until we can follow JTAC's recommendation to upgrade
> the QFX release to D46.
>
> I've downloaded the MIB pack from Juniper and tried grepping the files for
> "process" and "processes" but can't seem to find anything relevant.
>
> Anyone familiar with the SNMP on the QFX know where I might find total
> system process count ?

Junos implements HOST-RESOURCES-MIB, so you should be able to poll the
"hrSystemProcesses" object (gauge). Maybe that's how LibreNMS is doing it?

admin@gw> show snmp mib walk hrSystemProcesses
hrSystemProcesses.0 = 125

admin@gw> show system processes summary
last pid: 40910;  load averages:  0.20,  0.19,  0.17  up 29+08:45:43
08:45:48
126 processes: 18 running, 95 sleeping, 1 zombie, 12 waiting
[...]
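
Since the underlying symptom was zombie build-up, it may also be worth watching
the defunct count directly on the box until you get to D46. A rough one-liner
(zombie processes show up as <defunct> in the process list):

admin@gw> show system processes | match defunct | count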

Cheers
Dale


>
>
> Thanks,
> Chris