Broker 'loses' messages and leaks resources when handling duplicate connections
-------------------------------------------------------------------------------
Key: AMQ-889
URL: https://issues.apache.org/activemq/browse/AMQ-889
Project: ActiveMQ
Issue Type: Bug
Components: Broker
Affects Versions: 4.0.2
Reporter: John Heitmann
Attachments: doubleConsumer.patch.gz
A client that uses a transport like the failover transport can damage the
broker's ability to deliver messages to other clients and ultimately cause
broker resource leaks and message loss. I've found 4 issues, ranging from the
client to the broker, that could be addressed to make the situation a lot
better. In this issue I've provided a patch for #3.
1) A failover client stores session metadata commands like ConsumerInfo in a
local state tracker. When failover occurs, it replays these commands verbatim
to the newly connected-to broker. If the failover transport fails back to the
original broker, it will replay the same commands with the same ids it already
sent that broker. If the failover happens before the broker notices the old
connection has gone, this can result in bad mojo. Clients should probably
regenerate session, consumer, and maybe connection ids.
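A minimal sketch of the id-regeneration idea, not the actual failover
transport code — the class and method names here are hypothetical. On replay,
the tracker hands the broker freshly generated consumer ids instead of the
ones already sent, so even a fail-back to the original broker never presents
a duplicate id:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical state tracker that reassigns consumer ids on replay
// instead of resending the originals verbatim.
class ReplayTracker {
    record TrackedConsumer(long consumerId, String destination) {}

    private final List<TrackedConsumer> consumers = new ArrayList<>();
    private final AtomicLong idGenerator = new AtomicLong();

    long addConsumer(String destination) {
        long id = idGenerator.incrementAndGet();
        consumers.add(new TrackedConsumer(id, destination));
        return id;
    }

    // On failover, replay each tracked consumer under a fresh id so the
    // broker never sees the same ConsumerId twice, even when failing back
    // before the old connection has been dropped.
    List<TrackedConsumer> replayWithFreshIds() {
        List<TrackedConsumer> replayed = new ArrayList<>();
        for (TrackedConsumer old : consumers) {
            replayed.add(new TrackedConsumer(
                    idGenerator.incrementAndGet(), old.destination()));
        }
        consumers.clear();
        consumers.addAll(replayed);
        return replayed;
    }
}
```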
2) When the broker detects a duplicate ClientId being sent to it, it throws an
exception saying so, but this does not stop the broker from processing
subsequent messages on that connection. The broker should tear down the
connection immediately when it sees a client thrashing about.
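One way the tear-down could look, sketched with hypothetical names (this is
not the broker's actual registry): the first registration of a ClientId wins,
and a duplicate registration closes the offending connection immediately
rather than throwing and continuing to service its commands:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical clientId registry: duplicates are rejected and their
// connection is closed on the spot.
class ClientIdRegistry {
    interface Connection { void close(); }

    private final Map<String, Connection> active = new ConcurrentHashMap<>();

    // Returns false (and closes c) if clientId is already taken; the
    // caller should then stop processing commands from that connection.
    boolean register(String clientId, Connection c) {
        Connection existing = active.putIfAbsent(clientId, c);
        if (existing != null) {
            c.close();
            return false;
        }
        return true;
    }

    void unregister(String clientId) {
        active.remove(clientId);
    }
}
```

`putIfAbsent` makes the check-then-register step atomic, so two connections
racing with the same ClientId cannot both win.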
3) When a broker receives a certain series of ConsumerInfo add and remove
commands with the same ConsumerId, it leaks resources. One of the resources
leaked is the knowledge of lock owners on messages out in prefetch buffers.
This means those messages are stuck forever on the broker and can never be
retrieved and never be gc()ed. More below.
4) Messages locked and out in prefetch buffers have no broker-side timeout. If
a consumer is up, saying hello to the inactivity monitor, but otherwise doing
nothing, then its messages are locked forever. The broker should have a
janitor that redrives stale messages. This seems like the hardest of the 4 to
fix, but it is one of the most important.
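The janitor in #4 could be sketched like this — purely illustrative, with a
hypothetical class name and an arbitrary timeout; the real fix would need to
hook into the broker's dispatch and redelivery machinery:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical janitor: periodically sweep prefetch locks and redrive
// messages whose lock has gone stale.
class PrefetchJanitor {
    record LockedMessage(String messageId, long lockedAtMillis) {}

    private final List<LockedMessage> locked = new ArrayList<>();
    private final long timeoutMillis;

    PrefetchJanitor(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    void lock(String messageId, long nowMillis) {
        locked.add(new LockedMessage(messageId, nowMillis));
    }

    // Returns the ids of messages whose lock exceeded the timeout; the
    // broker would unlock these and make them dispatchable again.
    List<String> sweep(long nowMillis) {
        List<String> redriven = new ArrayList<>();
        Iterator<LockedMessage> it = locked.iterator();
        while (it.hasNext()) {
            LockedMessage m = it.next();
            if (nowMillis - m.lockedAtMillis() > timeoutMillis) {
                redriven.add(m.messageId());
                it.remove();
            }
        }
        return redriven;
    }
}
```

In practice sweep() would run on a scheduled broker thread; times are passed
in explicitly here only to keep the sketch testable.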
More on #3: One bad sequence of events is:
1) Consumer 'c' connects to the broker over a failover transport.
2) c subscribes to a queue and addConsumer() gets called.
3) c fails away and then fails back.
4) c replays its ConsumerInfo to the broker. addConsumer() gets called again
and overwrites the subscription tracking from the first call.
After this, the broker will eventually get a double remove and there will be
noisy JMX complaints etc., but the serious problem has already occurred in
step 4. My patch synchronizes the add step so that the broker is protected.
The individual client will still be a bit confused, and there will still be
noise when the second remove comes and JMX can't find the consumer to remove,
but the resource and message leaks are taken care of.
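The essence of the guarded add can be sketched as follows — hypothetical
names, not the patched broker code itself: a duplicate add for a ConsumerId
already being tracked is rejected instead of silently replacing the first
subscription (which is what orphans the lock-owner bookkeeping):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical subscription table: first add wins, duplicates are
// rejected, and a double remove reports failure instead of leaking.
class SubscriptionMap {
    private final Map<Long, String> subscriptions = new ConcurrentHashMap<>();

    // Returns false on a duplicate ConsumerId, keeping the original
    // subscription (and its prefetch lock ownership) intact.
    boolean addConsumer(long consumerId, String subscription) {
        return subscriptions.putIfAbsent(consumerId, subscription) == null;
    }

    // Returns false on a double remove, so the caller can log rather
    // than touch state that no longer exists.
    boolean removeConsumer(long consumerId) {
        return subscriptions.remove(consumerId) != null;
    }
}
```

With this guard, the replayed ConsumerInfo in step 4 is a no-op and the later
double remove degrades to a log message instead of a leak.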
I'll file issues on the other 3 problems if they sound valid to you and aren't
already entered.