Hi All,
I've got an odd situation: after a certain amount of time, my Jenkins 
server starts consuming large amounts of CPU.  This seems to happen every 
45 to 60 days.  When the CPU usage spikes, the Jenkins service doesn't 
come to a halt, and job build times don't seem to be overly impacted.  We 
do see an impact (latency) in things like opening config pages, 
authenticating logins, and navigating through builds.  Restarting the 
Jenkins service at that point gets things back to normal.

What is the best way to approach debugging this?  I've tried using jstack 
and tracking down the offending thread IDs (converted to hex, to match the 
nid= field in the dump), but that didn't seem to work at all.  I'm sure 
I'm doing something wrong there...  I've also looked at the Jenkins thread 
dump, and again I couldn't match my offending IDs to anything in the 
actual dump.  It's been pretty hard to debug this since it occurs so 
infrequently, and I can only let the condition run for so long before I 
have to restart the service...
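
For reference, here's the procedure I was attempting, in case I'm botching 
a step (this assumes a Linux master; the pid file path is a guess, adjust 
for your install):

  # Jenkins JVM process ID (pid file location varies by distro/install)
  JENKINS_PID=$(cat /var/run/jenkins/jenkins.pid)

  # Per-thread CPU usage inside that JVM; note the TID of the hot thread
  top -H -p "$JENKINS_PID"

  # Convert the decimal thread ID to hex, e.g. 12345 -> 3039
  printf '%x\n' 12345

  # Take a thread dump and find that hex value in the nid= field
  jstack "$JENKINS_PID" | grep -A 20 'nid=0x3039'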

My guess is that I've got a misbehaving plugin or something like that, and 
I'd really like to track it down.  I'm not against the idea of setting up 
a monthly restart window for Jenkins, but I'd rather avoid that if 
possible.  If you have any tips or tricks, I'd love to hear them.  I'm 
probably not going to see the condition again until sometime in August, 
but I'd like to have a debugging procedure in place for when it happens.
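
In the meantime, one thing I'm thinking of scripting ahead of time (a 
rough sketch, reusing $JENKINS_PID from above) is grabbing a few 
spaced-out thread dumps plus GC stats when the spike hits, so I can tell 
whether the CPU is going to application threads or to garbage collection:

  # Three thread dumps, 30 seconds apart, so the hot threads stand out
  for i in 1 2 3; do jstack "$JENKINS_PID" > "threads-$(date +%s).txt"; sleep 30; done

  # GC and heap-occupancy stats every 5 seconds, 12 samples
  jstat -gcutil "$JENKINS_PID" 5000 12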

cheers
Matt
