Hi, here is a repost with image links.

Hi, I'm still trying to deal with an Apache Tomcat web app and HBase
0.98.6.
The root problem is that the number of user threads constantly grows. I
get thousands of live threads on the Tomcat instance, and then it dies,
of course.

Please see the VisualVM thread count dynamics:
http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png


Please see the selected thread; it should be related to ZooKeeper
(because of the thread-name suffix SendThread):
http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png

The thread dump for this thread is:

"visit-thread-27799752116280271-EventThread" - Thread t@75
   java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <34671cea> (a
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)

   Locked ownable synchronizers:
- None
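
For what it's worth, ClientCnxn$EventThread is ZooKeeper's client
event-dispatch thread. Each ZooKeeper connection starts one
SendThread/EventThread pair, and both run until ZooKeeper.close() is
called, so the count of such threads should track the number of
connections that were opened but never closed. Here is a small sketch I
can drop into the app to log that count over time (ZkThreadCounter is
just an illustrative name):

public class ZkThreadCounter {
    // Counts live ZooKeeper client threads. ClientCnxn names them
    // "...-SendThread(host:port)" and "...-EventThread".
    public static int countZkClientThreads() {
        int n = 0;
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            String name = t.getName();
            if (name.contains("SendThread") || name.contains("EventThread")) {
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println("live ZK client threads: " + countZkClientThreads());
    }
}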

Why does it live "forever"? In the next 24 hours I will get ~1200 live
threads.

"visit thread" does simple put/get by key, newrelic says it takes 30-40 ms
to respond.
I just set a name for the thread inside servlet method.
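
For context, the handler logic is roughly like this (a simplified
sketch, not the exact code; the table and column names here are made
up):

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class VisitHandler {
    // simplified "visit" logic: name the thread, then one get and one put
    void handleVisit(HttpServletRequest req, HTableInterface table)
            throws IOException {
        // this is where the visit-thread-... name comes from
        Thread.currentThread().setName("visit-thread-" + req.getParameter("id"));

        byte[] row = Bytes.toBytes(req.getParameter("id"));
        Result r = table.get(new Get(row)); // get by key, used for the response
        Put p = new Put(row);
        p.add(Bytes.toBytes("f"), Bytes.toBytes("ts"),
              Bytes.toBytes(System.currentTimeMillis()));
        table.put(p);                       // put by key
    }
}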

Here is the CPU profiling result:
http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png

Here is the ZooKeeper status:
http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png

How can I debug this and find the root cause of these long-living
threads? It looks like I have a thread leak, but I have no idea why...
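
One guess I'd like to verify: if each request ends up creating its own
HConnection (e.g. HConnectionManager.createConnection() per call, never
closed), every request would leave exactly one SendThread/EventThread
pair behind, which matches the picture above. If that turns out to be
the case, the pattern I would switch to is a single shared connection,
roughly like this (a sketch against the 0.98 client API):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;

public class VisitServlet extends HttpServlet {
    // one HConnection (and thus one ZooKeeper session) for the whole app
    private HConnection connection;

    @Override
    public void init() throws ServletException {
        try {
            connection = HConnectionManager.createConnection(
                HBaseConfiguration.create());
        } catch (IOException e) {
            throw new ServletException(e);
        }
    }

    // per request: borrow a lightweight table handle, close it after use
    HTableInterface table() throws IOException {
        return connection.getTable("visits");
    }

    @Override
    public void destroy() {
        try {
            connection.close(); // this is what stops the ZK thread pair
        } catch (IOException e) {
            // ignore on shutdown
        }
    }
}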

2015-01-05 17:57 GMT+03:00 Ted Yu <yuzhih...@gmail.com>:

> I used gmail.
>
> Please consider using third party site where you can upload images.
>
>
