[ https://issues.apache.org/jira/browse/ARTEMIS-450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15490696#comment-15490696 ]

Josh Reagan commented on ARTEMIS-450:
-------------------------------------

This also seems to happen when consuming a large number of messages. Once the 
broker starts spitting out the "AMQ222174: Queue jms.queue.TEST.FOO, on 
address=jms.queue.TEST.FOO, is taking too long to flush deliveries. Watch out 
for frozen clients." message, it has to be restarted to get back to an 
operational state.
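
For reference, below is a rough sketch of the kind of plain JMS consumer load under 
which the warning was seen. The broker URL, credentials, and the receive loop are 
illustrative assumptions, not the actual application; on a 1.x broker the JMS queue 
TEST.FOO maps to the core address jms.queue.TEST.FOO.

{noformat}
// Illustration only: a plain JMS consumer draining a large backlog from TEST.FOO.
// Broker URL and the receive loop are assumptions, not the actual application code.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ConsumeManyMessages {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.FOO"));

            int received = 0;
            Message message;
            // Keep receiving until the queue stays empty for 5 seconds; the
            // AMQ222174 warning was observed while draining a large backlog.
            while ((message = consumer.receive(5000)) != null) {
                received++;
            }
            System.out.println("Consumed " + received + " messages");
        }
    }
}
{noformat}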

> Deadlocked broker
> -----------------
>
>                 Key: ARTEMIS-450
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-450
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>          Components: AMQP, Broker
>    Affects Versions: 1.2.0
>            Reporter: Gordon Sim
>         Attachments: stack-dump.txt, thread-dump-1.3.txt
>
>
> Not sure exactly how it came about; I noticed it when trying to shut down the 
> broker. The log has:
> {noformat}
> 21:43:17,985 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue 
> examples, on address=myqueue, is taking too long to flush deliveries. Watch 
> out for frozen clients.
> 21:43:18,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue 
> examples, on address=myqueue, is taking too long to flush deliveries. Watch 
> out for frozen clients.
> 21:43:19,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue 
> examples, on address=myqueue, is taking too long to flush deliveries. Watch 
> out for frozen clients.
> 21:43:20,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue 
> examples, on address=myqueue, is taking too long to flush deliveries. Watch 
> out for frozen clients.
> 21:43:28,928 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue 
> examples, on address=myqueue, is taking too long to flush deliveries. Watch 
> out for frozen clients.
> 21:43:45,937 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue 
> examples, on address=myqueue, is taking too long to flush deliveries. Watch 
> out for frozen clients.
> 21:44:18,698 WARN  [org.apache.activemq.artemis.core.client] AMQ212037: 
> Connection failure has been detected: AMQ119014: Did not receive data from 
> /127.0.0.1:51232. It is likely the client has exited or crashed without 
> closing its connection, or the network between the server and client has 
> failed. You also might have configured connection-ttl and 
> client-failure-check-period incorrectly. Please check user manual for more 
> information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
> 21:44:18,698 WARN  [org.apache.activemq.artemis.core.server] AMQ222061: 
> Client connection failed, clearing up resources for session 
> ebd714e5-efad-11e5-83fc-fe540024bf8d
> Exception in thread "Thread-0 (ActiveMQ-AIO-poller-pool2081191879-2061347276)" java.lang.Error: java.io.IOException: Error while submitting IO: Interrupted system call
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Error while submitting IO: Interrupted system call
>       at org.apache.activemq.artemis.jlibaio.LibaioContext.blockedPoll(Native Method)
>       at org.apache.activemq.artemis.jlibaio.LibaioContext.poll(LibaioContext.java:360)
>       at org.apache.activemq.artemis.core.io.aio.AIOSequentialFileFactory$PollerRunnable.run(AIOSequentialFileFactory.java:355)
>       ... 2 more
> {noformat}
> I'll attach a thread dump in which you can see that Thread-10 has locked the 
> handler lock in AbstractConnectionContext (part of the 'proton plug') and is 
> itself blocked on the lock in ServerConsumerImpl, which is held by Thread-21. 
> Thread-21 is in turn waiting for a write lock on the deliveryLock in 
> ServerConsumerImpl. However, Thread-20 already holds a read lock on that 
> deliveryLock and, while holding it, is blocked on the same handler lock within 
> the proton plug (object 0x00000000f3d2bd90) that Thread-10 has locked, which 
> closes the cycle.
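> The wait-for cycle (Thread-10 -> Thread-21 -> Thread-20 -> Thread-10) can be 
> illustrated with the self-contained sketch below. The lock names mirror the 
> thread dump, but this is plain Java written as an assumption-laden illustration, 
> not Artemis code.
> {noformat}
> // Illustration of the three-way lock cycle described above (not Artemis code).
> import java.util.concurrent.CountDownLatch;
> import java.util.concurrent.locks.ReentrantReadWriteLock;
>
> public class DeadlockCycleSketch {
>     static final Object handlerLock = new Object();   // proton plug handler lock
>     static final Object consumerLock = new Object();  // ServerConsumerImpl lock
>     static final ReentrantReadWriteLock deliveryLock = new ReentrantReadWriteLock();
>
>     public static void main(String[] args) throws Exception {
>         CountDownLatch firstLocksHeld = new CountDownLatch(3);
>
>         // "Thread-10": holds the handler lock, then wants the consumer lock.
>         Thread t10 = thread("Thread-10", () -> {
>             synchronized (handlerLock) {
>                 arrive(firstLocksHeld);
>                 synchronized (consumerLock) { }        // blocked by Thread-21
>             }
>         });
>         // "Thread-21": holds the consumer lock, then wants the write lock.
>         thread("Thread-21", () -> {
>             synchronized (consumerLock) {
>                 arrive(firstLocksHeld);
>                 deliveryLock.writeLock().lock();       // blocked by Thread-20's read lock
>             }
>         });
>         // "Thread-20": holds a read lock, then wants the handler lock.
>         thread("Thread-20", () -> {
>             deliveryLock.readLock().lock();
>             arrive(firstLocksHeld);
>             synchronized (handlerLock) { }             // blocked by Thread-10
>         });
>
>         t10.join();  // never returns: the three threads are deadlocked
>     }
>
>     static Thread thread(String name, Runnable r) {
>         Thread t = new Thread(r, name);
>         t.start();
>         return t;
>     }
>
>     static void arrive(CountDownLatch latch) {
>         latch.countDown();
>         try { latch.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
>     }
> }
> {noformat}
> Running it simply hangs with all three threads parked on each other's locks, 
> which matches the pattern in the attached dump.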



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
