Ah. Great, thanks Charles! :-)

philip
--
Anderson, Charles R wrote on 27/3/19 11:16 :
> Yes. It was fixed in a later release. Perhaps try 16.2R2-S8 if you don't
> want to change to a later version.
>
> On Wed, Mar 27, 2019 at 10:59:09AM +1000, Philip Smith wrote:
>> Hi everyone,
>>
>> Has anyone seen anything like this before? Searching on Google etc. has
>> revealed nothing.
>>
>> For a few months now, MX240s running 16.2R2.8 have shown one of the RE
>> CPUs running at around 90%. It has not affected the MX80s in the same
>> network running the same OS and having essentially the same config
>> (simple dual-stack network, IS-IS/BGP, that's about it).
>>
>> None of the MX240s started doing this at exactly the same time, and
>> there are no configuration changes around the times the CPU jumped.
>>
>> Here is an example:
>>
>> philip@TCR> show system processes extensive
>> last pid: 33210;  load averages: 1.23, 1.15, 1.14  up 343+02:08:01  06:42:04
>> 144 processes: 5 running, 138 sleeping, 1 waiting
>>
>> Mem: 284M Active, 1474M Inact, 183M Wired, 12M Cache, 91M Buf, 32M Free
>> Swap: 4096M Total, 319M Used, 3777M Free, 7% Inuse
>>
>>   PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME    WCPU COMMAND
>> 11473 root        1  52    0   724M  6048K piperd 3408.8  79.05% python
>>    10 root        1 155 ki31     0K    12K RUN    2883.3   2.20% idle
>> 11367 root        2 -26  r26   813M  8904K nanslp  182.5H  1.66% chassisd
>> 11825 root        3  20    0   903M 39936K kqread   36.9H  0.49% rpd
>> 11387 root        1  20    0   784M 29580K select  119.3H  0.39% mib2d
>> 11863 root        1  20    0   738M 10084K select   76.4H  0.29% snmpd
>>
>> Almost 80% caused by python, and I'm not doing any automation (that I've
>> knowingly set up).
>>
>> Jumping into a shell, I see this:
>>
>> philip@TCR> start shell
>> % ps ax | grep python
>> 11473  -  R    204530:44.49 /usr/bin/python /usr/libexec/icmd/icmd.py
>> 33362  0  S+       0:00.00 grep python
>>
>> According to the docs, ICMD is the "internal communication health
>> monitor daemon".
>>
>> Weirdly, I only see ICMD running on the MX240s, not the MX80s, even
>> though the docs suggest it applies to all MX:
>> https://www.juniper.net/documentation/en_US/junos/information-products/topic-collections/release-notes/16.2/topic-113675.html
>>
>> Any clues at all?
>>
>> Is a reboot the only way to make this go away? (Although one box was
>> rebooted recently, and after about two weeks of uptime the CPU jumped
>> up to 85% and has stayed there.)
>>
>> Thanks!
>>
>> philip
>
> _______________________________________________
> juniper-nsp mailing list  juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
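[Editor's note: as a stopgap before moving to a fixed release (16.2R2-S8 or later, per the reply above), one might kill the spinning icmd.py and let it be respawned, rather than rebooting the whole RE. This is not from the thread: whether Junos respawns icmd after a kill is an assumption, and the `PS_LINE` variable below stands in for live `ps ax` output on the RE (Junos runs a FreeBSD userland). A minimal sketch of pulling the PID out of the output shown above:]

```shell
# Hedged sketch (editor's, not from the thread): extract the icmd.py PID
# from the FreeBSD-style `ps` line quoted above so it could be killed as
# a stopgap. PS_LINE stands in for real `ps ax` output; whether icmd is
# respawned automatically after a kill is an unverified assumption.
PS_LINE='11473  -  R    204530:44.49 /usr/bin/python /usr/libexec/icmd/icmd.py'
PID=$(printf '%s\n' "$PS_LINE" | awk '/icmd\.py/ {print $1}')
echo "would run: kill $PID"   # → would run: kill 11473
```

On a live RE this would instead pipe `ps ax` straight into the awk filter (adding `&& !/awk/` to skip the filter's own process), checking afterwards whether the process returns and whether CPU stays down.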