Looks like a good way of checking it out. I'm in favor of committing to trunk.

On 1/19/2017 7:26 PM, Peter wrote:
Thanks Shawn & Dan for reviewing,

I'm happy to commit that to trunk now using lazy consensus.

Pat, how do you feel about this as a user review process?

Regards,

Peter.

Sent from my Samsung device.

---- Original message ----
From: Dan Rollo (JIRA) <j...@apache.org>
Sent: 20/01/2017 01:29:26 am
To: comm...@river.apache.org
Subject: [jira] [Commented] (RIVER-447) Leaked Executor Service Threads in LoadClass


    [ https://issues.apache.org/jira/browse/RIVER-447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15830100#comment-15830100 ]

Dan Rollo commented on RIVER-447:
---------------------------------

The 'River-447.patch' looks good.

 Leaked Executor Service Threads in LoadClass
 --------------------------------------------

                 Key: RIVER-447
                 URL: https://issues.apache.org/jira/browse/RIVER-447
             Project: River
          Issue Type: Bug
          Components: net_jini_loader
    Affects Versions: River_3.0.0
         Environment: Linux with either JDK 1.7 or 1.8
            Reporter: Shawn Ellis
              Labels: PreferredClassLoader, leaks, threads
         Attachments: ExecutorShutdown.patch, River-447.patch


 I am seeing an overall thread usage increase when using Apache River 3.0. I'm 
able to reproduce the problem with both JDK 1.7 and 1.8. The issue is that 
LoadClass makes use of a loaderMap that contains an Executor Service. After 10 
seconds, the loaderMap will garbage collect the Executor Service, but the 
Executor Service will not be shut down. This leaves the Executor Service thread 
still running and waiting for work.
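
 To make the mechanism concrete, here is a rough sketch of the leak pattern 
described above. It is only an illustration, not the actual LoadClass / 
PreferredClassLoader code; the class, field and method names are made up.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Illustrative only: one executor is created and cached per class loader,
    // but eviction just drops the map entry without shutting the executor down.
    public class LoaderExecutorLeakSketch {

        private final Map<ClassLoader, ExecutorService> loaderMap =
                new ConcurrentHashMap<>();

        ExecutorService executorFor(ClassLoader loader) {
            return loaderMap.computeIfAbsent(loader,
                    l -> Executors.newSingleThreadExecutor());
        }

        // Imagine this being called by a cleanup task ~10 seconds after last use.
        void evict(ClassLoader loader) {
            // Leak: the entry is gone, but the executor's worker thread is still
            // alive, parked in LinkedBlockingQueue.take() waiting for tasks.
            loaderMap.remove(loader);
        }
    }
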
      How to Reproduce:
      1. Start up an Apache River 3.0 instance
      2. Have a client connect to the River instance
      3. Wait 10 seconds
      4. Have the client connect to the River instance a second time. The number
         of threads will have increased.
      The leaked threads have a stack trace similar to the one below.
        
"net.jini.loader.pref.PreferredClassLoader@7af8260a["httpmd://10.0.1.5:9070/reggie-dl.jar;sha=6c5b83e0caec74d5d4226dcd2c2311d29e81ac0a
 
httpmd://10.0.1.5:9070/jsk-dl.jar;sha=002bca7b77431ba20385d7ca5be8fa8ec1124a01"]_thread-0"
 #30149 prio=5 os_prio=0 tid=0x00003fff68f79000 nid=0x5db9 waiting on condition [0x00003ffdc344d000]
           java.lang.Thread.State: WAITING (parking)
                at sun.misc.Unsafe.park(Native Method)
                - parking to wait for  <0x00000000f2955ff0> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
                at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
                at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
                at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
                at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
                at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
                at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                at java.lang.Threadrun(Thread.java:745)
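
 A common remedy for this kind of leak is to shut the executor down at the point 
its map entry is removed, so the worker thread can exit once it is idle. The 
sketch below shows only that general technique; it is not taken from the attached 
River-447.patch or ExecutorShutdown.patch and may differ from what they do.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Illustrative only: same cache shape as the sketch above, but eviction
    // shuts the executor down so its worker thread terminates instead of leaking.
    public class LoaderExecutorShutdownSketch {

        private final Map<ClassLoader, ExecutorService> loaderMap =
                new ConcurrentHashMap<>();

        ExecutorService executorFor(ClassLoader loader) {
            return loaderMap.computeIfAbsent(loader,
                    l -> Executors.newSingleThreadExecutor());
        }

        void evict(ClassLoader loader) {
            ExecutorService removed = loaderMap.remove(loader);
            if (removed != null) {
                removed.shutdown(); // no new tasks accepted; the thread exits when idle
            }
        }
    }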


