[ https://issues.apache.org/jira/browse/SLING-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13905610#comment-13905610 ]

Carsten Ziegeler commented on SLING-3383:
-----------------------------------------

Looking at the thread dump, there is no thread hanging in Sling eventing 
anymore; however, the above entry is still printed.
Does this mean that another bundle holds the class loader?

> Non stopping thread in AbstractJobQueue causes classloader leak 
> ----------------------------------------------------------------
>
>                 Key: SLING-3383
>                 URL: https://issues.apache.org/jira/browse/SLING-3383
>             Project: Sling
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: Event 3.3.4
>            Reporter: Chetan Mehrotra
>            Assignee: Carsten Ziegeler
>             Fix For: Event 3.3.6
>
>
> While analyzing a heap dump for classloader leaks using the script [1], the 
> following possible leak was reported:
> {noformat}
>       org.apache.sling.event.impl.jobs.queues.AbstractJobQueue$1@0x12547e960
>        Following are few of the live paths found
>        Live path
>               org.apache.sling.event.impl.jobs.queues.AbstractJobQueue$1@0x12547e960
>               java.lang.Thread@0x12547e888
>               [Ljava.lang.Thread;@0x124f18f58
>               java.lang.ThreadGroup@0x123346e80
>               java.lang.Thread@0x126635c48
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x13161da50
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x1318253e0
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x1318253c0
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x1318253a0
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a128c90
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a24c198
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a24c178
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a24c158
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a24c138
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a24c118
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a24c0f8
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a24c0d8
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a24c0b8
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a24c098
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a24c078
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a24c058
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a007e30
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a007e10
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a007df0
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a007dd0
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a007db0
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a007d90
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@0x12a007d70
>               java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@0x125d2c968
>               java.lang.Thread@0x126637b00
>               java.util.concurrent.ThreadPoolExecutor$Worker@0x126637b68
>               java.util.HashMap$Entry@0x126640058
>               [Ljava.util.HashMap$Entry;@0x12663fde8
>               java.util.HashMap@0x125d2ca08
>               java.util.HashSet@0x125d2c9f8
>               java.util.concurrent.ThreadPoolExecutor@0x125d2c8a0
>               org.apache.sling.commons.threads.impl.DefaultThreadPool@0x125d2c7b0 [*]
>               org.apache.sling.commons.threads.impl.ThreadPoolFacade@0x125d2c798 [*]
>               org.apache.sling.commons.threads.impl.DefaultThreadPoolManager$Entry@0x125d535d0 [*]
>               java.util.HashMap$Entry@0x125d535b0
>               [Ljava.util.HashMap$Entry;@0x124147940
>               java.util.HashMap@0x124147910
>               org.apache.sling.commons.threads.impl.DefaultThreadPoolManager@0x124146e50 [*]
>               org.apache.felix.framework.ServiceRegistrationImpl@0x12414f3e8
>               org.apache.sling.commons.threads.impl.Activator@0x12414f3d0 [*]
>               org.apache.felix.framework.BundleImpl@0x122809f78
>               org.apache.felix.framework.ServiceRegistrationImpl@0x1241e8c40
>               class org.apache.sling.commons.threads.impl.WebConsolePrinter
>               org.apache.sling.commons.threads.impl.WebConsolePrinter@0x1241e8a70 [*]
> {noformat}
> It appears that the thread created in AbstractJobQueue [2] is not 
> interrupted upon deactivation/shutdown. So the thread remains blocked 
> waiting (in the take method of the various queue implementations) and never 
> gets a chance to check the running flag. A better approach would be to 
> handle the InterruptedException properly [3] and exit the thread as part of 
> the InterruptedException handling (a rough sketch follows the references below).
> [1] https://gist.github.com/chetanmeh/8860776 
> [2] https://github.com/apache/sling/blob/trunk/bundles/extensions/event/src/main/java/org/apache/sling/event/impl/jobs/queues/AbstractJobQueue.java#L161
> [3] http://www.ibm.com/developerworks/java/library/j-jtp05236/index.html
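>
> For illustration only, here is a minimal sketch of that pattern; the class, 
> field, and thread names below are made up and this is not the actual 
> AbstractJobQueue code. The background thread treats an interrupt as a 
> shutdown signal and leaves its loop, and stop() interrupts the worker so a 
> blocking take() cannot park the thread forever and pin the bundle's class 
> loader.
> {noformat}
> import java.util.concurrent.BlockingQueue;
> import java.util.concurrent.LinkedBlockingQueue;
>
> // Hypothetical sketch of the proposed fix: interrupt the background thread on
> // shutdown and treat InterruptedException in take() as a signal to exit.
> public class QueueShutdownSketch {
>
>     private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();
>     private volatile boolean running = true;
>     private volatile Thread backgroundThread;
>
>     public void start() {
>         backgroundThread = new Thread(new Runnable() {
>             public void run() {
>                 while (running) {
>                     try {
>                         Runnable job = queue.take(); // blocks until a job arrives
>                         job.run();
>                     } catch (InterruptedException e) {
>                         // Restore the interrupt status and leave the loop instead of
>                         // swallowing the exception and blocking again in take().
>                         Thread.currentThread().interrupt();
>                         return;
>                     }
>                 }
>             }
>         }, "job-queue-background");
>         backgroundThread.setDaemon(true);
>         backgroundThread.start();
>     }
>
>     public void stop() {
>         running = false;
>         Thread t = backgroundThread;
>         if (t != null) {
>             // Without this interrupt the thread stays parked in take(), keeping the
>             // bundle's class loader reachable - the leak reported above.
>             t.interrupt();
>         }
>     }
> }
> {noformat}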



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
