[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2013-03-21 Thread reynald.bo...@gmail.com (JIRA)

Reynald Borer commented on JENKINS-14362:

Hi,

I'm also experiencing this issue with Jenkins 1.506 in the following environment:

	Jenkins 1.506
	Debian Squeeze 64-bit (Linux buildsrv1 3.2.0-0.bpo.2-amd64)
	Java(TM) SE Runtime Environment (build 1.6.0_37-b06)

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2013-02-27 Thread uwe+jenk...@bsdx.de (JIRA)

Uwe Stuehler commented on JENKINS-14362:

Could this be related to the (lack of) thread safety in java.util.zip?

https://forums.oracle.com/forums/thread.jspa?messageID=4627097

In GNU Classpath, java.util.zip.Deflater is clearly documented as not thread-safe due to limitations of the API. I couldn't find any mention of thread safety in the Sun/Oracle documentation, but others have concluded that their implementation isn't thread-safe either.
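To illustrate the concern, here is a minimal, hypothetical Java sketch (not code from Jenkins or Stapler; the class and field names are made up). java.util.zip.Deflater wraps native zlib state, so sharing one instance across request threads without external synchronization can corrupt that state, while confining each Deflater to a single thread avoids the problem:

import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class DeflaterConfinementSketch {

    // Unsafe pattern: one shared Deflater touched by many request threads.
    // private static final Deflater SHARED = new Deflater();

    // Safer pattern: one Deflater per thread (Java 6 compatible).
    private static final ThreadLocal<Deflater> PER_THREAD = new ThreadLocal<Deflater>() {
        @Override protected Deflater initialValue() {
            return new Deflater();
        }
    };

    public static byte[] compress(byte[] input) {
        Deflater deflater = PER_THREAD.get();
        deflater.reset();          // reuse this thread's own instance
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!deflater.finished()) {
            int n = deflater.deflate(buf);
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}

Whether Jenkins or Stapler actually shares a Deflater between threads is not established here; the sketch only shows why unsynchronized sharing would be a plausible cause of a thread spinning inside Deflater.deflateBytes.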

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2013-02-27 Thread jos...@java.net (JIRA)

Jose Sa commented on JENKINS-14362:

I added that entry to my startup script two weeks ago and haven't experienced the 100% CPU symptom since.

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2013-02-27 Thread jan.ho...@heidelberg.com (JIRA)

Jan Hoppe commented on JENKINS-14362:

Hi,

we have been running into this problem frequently lately.
I found a flag that disables the CompressionFilter (and with it the reportException path):
add -Dorg.kohsuke.stapler.compression.CompressionFilter.disabled=true to the JVM options in jenkins.xml (see the examples below).

Good luck,
Jan
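For anyone unsure where that option goes: the right file depends on how Jenkins is launched, so treat the snippets below as illustrative examples rather than exact configuration (the surrounding arguments and paths are placeholders). On a Windows service install the property can be appended to the <arguments> element of jenkins.xml; on a Debian/Ubuntu package install it can be added to JAVA_ARGS in /etc/default/jenkins. Jenkins must be restarted for the flag to take effect.

<!-- jenkins.xml (Windows service wrapper); the other arguments are placeholders -->
<arguments>-Xrs -Xmx256m -Dorg.kohsuke.stapler.compression.CompressionFilter.disabled=true -jar "%BASE%\jenkins.war" --httpPort=8080</arguments>

# /etc/default/jenkins (Debian/Ubuntu package); appends to whatever is already set
JAVA_ARGS="$JAVA_ARGS -Dorg.kohsuke.stapler.compression.CompressionFilter.disabled=true"

Presumably this trades away Stapler's gzip compression of responses for the whole instance, so it is a workaround rather than a fix.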

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2013-02-06 Thread ricko2...@att.net (JIRA)

Richard Otter commented on JENKINS-14362:

On my system, just upgraded to 1.492 from the last LTS (1.480.1), this happens whenever I display an EMMA code coverage report.
The server's java process stays above 80% CPU for at least 20 minutes until I use the method above to kill the "Active Request" in the Monitoring plugin.
I've seen this behavior for months, but this JIRA thread showed me how to kill the thread. That's progress.

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2013-01-30 Thread bruce.e...@gmail.com (JIRA)

Bruce Edge commented on JENKINS-14362:

I have the same problem. It triggers soft lockups in the kernel of the Jenkins host:

Jan 30 02:31:58 build kernel: [589969.228931] INFO: rcu_sched detected stall on CPU 3 (t=57719 jiffies)
Jan 30 02:31:58 build kernel: [589969.228939] INFO: rcu_sched detected stall on CPU 2 (t=57719 jiffies)
Jan 30 02:31:58 build kernel: [589969.228929] INFO: rcu_sched detected stall on CPU 1 (t=57719 jiffies)
Jan 30 02:31:58 build kernel: [589969.228944] INFO: rcu_sched detected stall on CPU 4 (t=57719 jiffies)
Jan 30 02:31:58 build kernel: [589969.228939] sending NMI to all CPUs:

Note the timestamp correlation between the exception below and the stalls above.

Jan 30, 2013 2:31:58 AM org.kohsuke.stapler.compression.CompressionFilter reportException
WARNING: Untrapped servlet exception
winstone.ClientSocketException: Failed to write to client
	at winstone.ClientOutputStream.write(ClientOutputStream.java:41)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at winstone.WinstoneOutputStream.commit(WinstoneOutputStream.java:165)
	at winstone.WinstoneOutputStream.flush(WinstoneOutputStream.java:217)
	at winstone.WinstoneOutputStream.close(WinstoneOutputStream.java:227)
	at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:241)
	at org.kohsuke.stapler.compression.FilterServletOutputStream.close(FilterServletOutputStream.java:36)
	at net.bull.javamelody.FilterServletOutputStream.close(FilterServletOutputStream.java:46)

This happens most nights at the same time.

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2012-12-13 Thread jos...@java.net (JIRA)

Jose Sa commented on JENKINS-14362:

I still have this problem at least twice a week: CPU goes to 400% (on a 4-CPU machine) and forces me to restart the server.

When it happens I can work around it (temporarily) with the Monitoring plugin: go to the list of threads, sort the table by execution time, and kill the threads at the top that are actively running "DeflaterOutputStream.deflate". That makes the CPU drop back to normal, but a restart is usually still needed by the end of the day.
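As a complement to that workaround, here is a small, hypothetical Java sketch (standard JDK APIs only, nothing Jenkins-specific; the class name is made up) that lists the threads whose top stack frame is java.util.zip.Deflater.deflateBytes, i.e. the ones that end up at the top of the Monitoring table. It only identifies candidates; stopping them is a separate and riskier step, since a thread spinning in native code may not respond to an interrupt.

import java.util.Map;

public class FindDeflatingThreads {
    public static void main(String[] args) {
        // Walk every live thread and inspect its current top frame.
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            StackTraceElement[] trace = entry.getValue();
            if (trace.length > 0
                    && "java.util.zip.Deflater".equals(trace[0].getClassName())
                    && "deflateBytes".equals(trace[0].getMethodName())) {
                // Report the thread; what to do with it is left to the operator.
                System.out.println("Busy deflating: " + entry.getKey().getName()
                        + " (state " + entry.getKey().getState() + ")");
            }
        }
    }
}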

Here is a stack trace collected from our logs:

WARNING: Untrapped servlet exception
winstone.ClientSocketException: Failed to write to client
at winstone.ClientOutputStream.write(ClientOutputStream.java:41)
at winstone.WinstoneOutputStream.commit(WinstoneOutputStream.java:181)
at winstone.WinstoneOutputStream.commit(WinstoneOutputStream.java:119)
at winstone.WinstoneOutputStream.write(WinstoneOutputStream.java:112)
at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:169)
at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:238)
at org.kohsuke.stapler.compression.FilterServletOutputStream.close(FilterServletOutputStream.java:36)
at net.bull.javamelody.FilterServletOutputStream.close(FilterServletOutputStream.java:46)
at java.io.FilterOutputStream.close(FilterOutputStream.java:160)
at sun.nio.cs.StreamEncoder.implClose(StreamEncoder.java:320)
at sun.nio.cs.StreamEncoder.close(StreamEncoder.java:149)
at java.io.OutputStreamWriter.close(OutputStreamWriter.java:233)
at java.io.BufferedWriter.close(BufferedWriter.java:266)
at org.dom4j.io.XMLWriter.close(XMLWriter.java:286)
at org.kohsuke.stapler.jelly.HTMLWriterOutput.close(HTMLWriterOutput.java:70)
at org.kohsuke.stapler.jelly.DefaultScriptInvoker.invokeScript(DefaultScriptInvoker.java:56)
at org.kohsuke.stapler.jelly.JellyClassTearOff.serveIndexJelly(JellyClassTearOff.java:107)
at org.kohsuke.stapler.jelly.JellyFacet.handleIndexRequest(JellyFacet.java:127)
at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:563)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:659)
at org.kohsuke.stapler.MetaClass$6.doDispatch(MetaClass.java:241)
at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:53)
at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:574)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:659)
at org.kohsuke.stapler.MetaClass$12.dispatch(MetaClass.java:384)
at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:574)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:659)
at org.kohsuke.stapler.MetaClass$4.doDispatch(MetaClass.java:203)
at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:53)
at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:574)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:659)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:488)
at org.kohsuke.stapler.Stapler.service(Stapler.java:162)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:45)
at winstone.ServletConfiguration.execute(ServletConfiguration.java:248)
at winstone.RequestDispatcher.forward(RequestDispatcher.java:333)
at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:376)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:95)
at hudson.plugins.greenballs.GreenBallFilter.doFilter(GreenBallFilter.java:58)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:98)
at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:206)
at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:179)
at net.bull.javamelody.PluginMonitoringFilter.doFilter(PluginMonitoringFilter.java:86)
at org.jvnet.hudson.plugins.monitoring.HudsonMonitoringFilter.doFilter(HudsonMonitoringFilter.java:84)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:98)
at hudson.util.PluginServletFilter.doFilter(PluginServletFilter.java:87)
at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
at winstone.RequestDispatcher.doFilter(Reque

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2012-12-06 Thread robert_lloyd_1...@yahoo.com (JIRA)

Bob Lloyd edited a comment on JENKINS-14362:

I have the same issue, but cannot track it to a specific job. I am not generating Performance reports (to my knowledge, unless it's happening without my intention). I've attached my log below (though it's pretty much the same as above). I'm running Jenkins 1.491 with Sun JDK 1.6.0_26.

This happens for me after about 36 hours of Jenkins uptime. I have ~30 jobs running on 6 servers. One server runs jobs almost constantly, while the others run much less frequently.


"RequestHandlerThread541" daemon prio=6 tid=0x4959d800 nid=0x8c4 runnable [0x4b40f000]
   java.lang.Thread.State: RUNNABLE
	at java.util.zip.Deflater.deflateBytes(Native Method)
	at java.util.zip.Deflater.deflate(Unknown Source)
	- locked <0x19974c70> (a java.util.zip.ZStreamRef)
	at java.util.zip.DeflaterOutputStream.deflate(Unknown Source)
	at java.util.zip.DeflaterOutputStream.write(Unknown Source)
	at java.util.zip.GZIPOutputStream.write(Unknown Source)
	- locked <0x19974c80> (a java.util.zip.GZIPOutputStream)
	at org.kohsuke.stapler.compression.FilterServletOutputStream.write(FilterServletOutputStream.java:31)
	at sun.nio.cs.StreamEncoder.writeBytes(Unknown Source)
	at sun.nio.cs.StreamEncoder.implClose(Unknown Source)
	at sun.nio.cs.StreamEncoder.close(Unknown Source)
	- locked <0x19976ce0> (a java.io.OutputStreamWriter)
	at java.io.OutputStreamWriter.close(Unknown Source)
	at java.io.PrintWriter.close(Unknown Source)
	- locked <0x19976ce0> (a java.io.OutputStreamWriter)
	at org.kohsuke.stapler.compression.CompressionFilter.reportException(CompressionFilter.java:77)
	at org.kohsuke.stapler.compression.CompressionFilter.doFilter(CompressionFilter.java:53)
	at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
	at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:366)
	at hudson.util.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:81)
	at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
	at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:366)
	at winstone.RequestDispatcher.forward(RequestDispatcher.java:331)
	at winstone.RequestHandlerThread.processRequest(RequestHandlerThread.java:215)
	at winstone.RequestHandlerThread.run(RequestHandlerThread.java:138)
	at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
	at java.util.concurrent.FutureTask.run(Unknown Source)
	at winstone.BoundedExecutorService$1.run(BoundedExecutorService.java:77)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)

   Locked ownable synchronizers:
	- <0x19973bf8> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2012-12-06 Thread robert_lloyd_1...@yahoo.com (JIRA)

Bob Lloyd commented on JENKINS-14362:

I have the same issue, but cannot track it to a specific job. I am not generating Performance reports (to my knowledge, unless it's happening without my intention). I've attached my log below (though it's pretty much the same as above).

This happens for me after about 36 hours of Jenkins uptime. I have ~30 jobs running on 6 servers. One server runs jobs almost constantly, while the others run much less frequently.


"RequestHandlerThread541" daemon prio=6 tid=0x4959d800 nid=0x8c4 runnable [0x4b40f000]
   java.lang.Thread.State: RUNNABLE
	at java.util.zip.Deflater.deflateBytes(Native Method)
	at java.util.zip.Deflater.deflate(Unknown Source)
	- locked <0x19974c70> (a java.util.zip.ZStreamRef)
	at java.util.zip.DeflaterOutputStream.deflate(Unknown Source)
	at java.util.zip.DeflaterOutputStream.write(Unknown Source)
	at java.util.zip.GZIPOutputStream.write(Unknown Source)
	- locked <0x19974c80> (a java.util.zip.GZIPOutputStream)
	at org.kohsuke.stapler.compression.FilterServletOutputStream.write(FilterServletOutputStream.java:31)
	at sun.nio.cs.StreamEncoder.writeBytes(Unknown Source)
	at sun.nio.cs.StreamEncoder.implClose(Unknown Source)
	at sun.nio.cs.StreamEncoder.close(Unknown Source)
	- locked <0x19976ce0> (a java.io.OutputStreamWriter)
	at java.io.OutputStreamWriter.close(Unknown Source)
	at java.io.PrintWriter.close(Unknown Source)
	- locked <0x19976ce0> (a java.io.OutputStreamWriter)
	at org.kohsuke.stapler.compression.CompressionFilter.reportException(CompressionFilter.java:77)
	at org.kohsuke.stapler.compression.CompressionFilter.doFilter(CompressionFilter.java:53)
	at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
	at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:366)
	at hudson.util.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:81)
	at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
	at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:366)
	at winstone.RequestDispatcher.forward(RequestDispatcher.java:331)
	at winstone.RequestHandlerThread.processRequest(RequestHandlerThread.java:215)
	at winstone.RequestHandlerThread.run(RequestHandlerThread.java:138)
	at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
	at java.util.concurrent.FutureTask.run(Unknown Source)
	at winstone.BoundedExecutorService$1.run(BoundedExecutorService.java:77)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)

   Locked ownable synchronizers:
	- <0x19973bf8> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2012-11-06 Thread david.pars...@gmail.com (JIRA)

David Pärsson edited a comment on JENKINS-14362:

I think I have seen this issue (or a very similar one) when trying to generate Performance plugin reports from huge JMeter result files. Jenkins v1.471 and Performance plugin v1.8.

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2012-11-06 Thread david.pars...@gmail.com (JIRA)

David Pärsson commented on JENKINS-14362:

I have seen this issue (or a very similar one) when trying to generate Performance plugin reports from huge JMeter result files. Jenkins v1.471 and Performance plugin v1.8.

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2012-07-09 Thread chris+j...@aptivate.org (JIRA)

Chris Wilson updated JENKINS-14362:

Change By: Chris Wilson (09/Jul/12 4:33 PM)
Attachment: jenkins.stacktrace.1
Attachment: jenkins.stacktrace.2
Attachment: jenkins.stacktrace.3

[JIRA] (JENKINS-14362) 100% CPU load during org.kohsuke.stapler.compression.CompressionFilter.reportException

2012-07-09 Thread chris+j...@aptivate.org (JIRA)

Chris Wilson created JENKINS-14362:

Issue Type: Bug
Affects Versions: current
Assignee: Unassigned
Components: core
Created: 09/Jul/12 4:31 PM

Description:

Jenkins starts using 100% CPU after a few days. Using jstack I see several threads trying to write compressed output, and apparently not changing over time:


	java.util.zip.Deflater.deflate(byte[], int, int) @bci=55, line=322 (Compiled frame; information may be imprecise)
	java.util.zip.DeflaterOutputStream.deflate() @bci=14, line=176 (Compiled frame)
	java.util.zip.DeflaterOutputStream.write(byte[], int, int) @bci=108, line=135 (Compiled frame)
	java.util.zip.GZIPOutputStream.write(byte[], int, int) @bci=4, line=89 (Compiled frame)
	org.kohsuke.stapler.compression.FilterServletOutputStream.write(byte[], int, int) @bci=7, line=31 (Compiled frame)
	sun.nio.cs.StreamEncoder.writeBytes() @bci=120, line=220 (Interpreted frame)
	sun.nio.cs.StreamEncoder.implClose() @bci=84, line=315 (Interpreted frame)
	sun.nio.cs.StreamEncoder.close() @bci=18, line=148 (Interpreted frame)
	java.io.OutputStreamWriter.close() @bci=4, line=233 (Interpreted frame)
	java.io.PrintWriter.close() @bci=21, line=312 (Interpreted frame)
	org.kohsuke.stapler.compression.CompressionFilter.reportException(java.lang.Exception, javax.servlet.http.HttpServletResponse) @bci=112, line=77 (Interpreted frame)
	org.kohsuke.stapler.compression.CompressionFilter.doFilter(javax.servlet.ServletRequest, javax.servlet.ServletResponse, javax.servlet.FilterChain) @bci=57, line=53 (Compiled frame)
	winstone.FilterConfiguration.execute(javax.servlet.ServletRequest, javax.servlet.ServletResponse, javax.servlet.FilterChain) @bci=25, line=194 (Compiled frame)
	winstone.RequestDispatcher.doFilter(javax.servlet.ServletRequest, javax.servlet.ServletResponse) @bci=48, line=366 (Compiled frame)
	hudson.util.CharacterEncodingFilter.doFilter(javax.servlet.ServletRequest, javax.servlet.ServletResponse, javax.servlet.FilterChain) @bci=43, line=81 (Compiled frame)
	winstone.FilterConfiguration.execute(javax.servlet.ServletRequest, javax.servlet.ServletResponse, javax.servlet.FilterChain) @bci=25, line=194 (Compiled frame)
	winstone.RequestDispatcher.doFilter(javax.servlet.ServletRequest, javax.servlet.ServletResponse) @bci=48, line=366 (Compiled frame)
	winstone.RequestDispatcher.forward(javax.servlet.ServletRequest, javax.servlet.ServletResponse) @bci=483, line=331 (Compiled frame)
	winstone.RequestHandlerThread.processRequest(winstone.WebAppConfiguration, winstone.WinstoneRequest, winstone.WinstoneResponse, java.lang.String) @bci=38, line=215 (Compiled frame)
	winstone.RequestHandlerThread.run() @bci=631, line=138 (Compiled frame)
	java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=471 (Interpreted frame)
	java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 (Interpreted frame)
	java.util.concurrent.FutureTask.run() @bci=4, line=166 (Interpreted frame)
	winstone.BoundedExecutorService$1.run() @bci=4, line=77 (Compiled frame)
	java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Compiled frame)
	java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
	java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)



I suspect, but am not 100% sure, that these threads are in an infinite loop (livelocked); I can't see what other threads could be causing this.

This JVM was not started with debugging enabled, so I could not attach a debugger for analysis. I have enabled it now. Stack traces are attached as files below.
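One way to check the suspicion that the stacks are not changing, without attaching a debugger, is to sample them twice from inside the affected JVM and flag RUNNABLE threads whose top frame is identical in both samples. This is a hypothetical sketch using only standard java.lang.management APIs (the class name is made up, and it is shown as a standalone class for readability); identical samples are a hint of a busy loop, not proof:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.HashMap;
import java.util.Map;

public class LivelockProbe {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        Map<Long, String> first = topFrames(mx);
        Thread.sleep(5000);                       // wait between the two samples
        Map<Long, String> second = topFrames(mx);
        for (Map.Entry<Long, String> e : second.entrySet()) {
            String before = first.get(e.getKey());
            if (before != null && before.equals(e.getValue())) {
                System.out.println("Possibly spinning: thread id=" + e.getKey()
                        + " top frame=" + e.getValue());
            }
        }
    }

    // Map of thread id -> top stack frame, for RUNNABLE threads only.
    private static Map<Long, String> topFrames(ThreadMXBean mx) {
        Map<Long, String> result = new HashMap<Long, String>();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            if (info != null
                    && info.getThreadState() == Thread.State.RUNNABLE
                    && info.getStackTrace().length > 0) {
                result.put(info.getThreadId(), info.getStackTrace()[0].toString());
            }
        }
        return result;
    }
}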




Environment:

[chris@fen-vz-jenkins ~]$ java -version
java version "1.6.0_22"
OpenJDK Runtime Environment (IcedTea6 1.10.8) (rhel-1.27.1.10.8.el5_8-i386)
OpenJDK Server VM (build 20.0-b11, mixed mode)

[chris@fen-vz-jenkins ~]$ uname -a
Linux fen-vz-jenkins.fen.aptivate.org 2.6.32-7-pve #1 SMP Mon