[ https://issues.apache.org/jira/browse/ARTEMIS-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985951#comment-16985951 ]

Howard Gao commented on ARTEMIS-2560:
-------------------------------------

I've investigated this problem. So far I can see that qpid-jms has the issue 
(along with the Python client, I believe). I think there is a problem with the 
AMQPMessage internal properties not being properly cleaned up.
I'll see whether this also happens with the STOMP protocol (and others).


> Duplicate messages created by cluster lead to OOME crash
> --------------------------------------------------------
>
>                 Key: ARTEMIS-2560
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-2560
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>          Components: AMQP
>    Affects Versions: 2.10.1
>            Reporter: Mikko Niemi
>            Priority: Major
>         Attachments: python-qpid-consumer.py, python-qpid-producer.py, 
> server0-broker.xml, server1-broker.xml
>
>
> Summary: When using a two-node cluster with a very simple configuration (see 
> attached broker.xml files), duplicate messages are generated in the queue 
> when a Python client is used to consume messages one by one from alternating 
> nodes. Duplicate messages are generated until an OutOfMemoryError crashes the 
> broker.
> Detailed description of how to reproduce this problem:
>  # Create a two-node cluster using the attached broker.xml files. The nodes 
> are called server0 and server1 for the rest of this description.
>  # Produce 100 messages to a queue defined in the address configuration 
> inside the broker.xml file on node server0, using the attached Python 
> producer (see the sketch after this list). The produced messages have 
> identical content. Command to produce the messages: "python 
> python-qpid-producer.py -u $username -p $password -H server0 -a exampleQueue 
> -m TestMessageFooBar -A 100"
>  # Consume one message from server1 using the attached Python consumer. The 
> cluster balances the messages to server1 and the total number of messages 
> decreases by one. After one message is consumed, the session and connection 
> are closed. Command to consume a message: "python python-qpid-consumer.py -u 
> $username -p $password -H server1 -a exampleQueue"
>  # Consume one message from server0 using the attached Python consumer. The 
> cluster balances the messages to server0 and the total number of messages 
> decreases by one. After one message is consumed, the session and connection 
> are closed. Command to consume a message: "python python-qpid-consumer.py -u 
> $username -p $password -H server0 -a exampleQueue"
>  # Consume one message from server1 using the attached Python consumer. The 
> cluster balances the messages to server1, but this time the total number of 
> messages increases dramatically.
>  # If message consumption continues in the manner described above (one 
> message from one node, then one message from the other node), more messages 
> keep appearing in the queue until the broker runs out of memory and crashes.
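> The attached scripts are not reproduced in this message, but the pattern 
> above can be sketched with python-qpid-proton's BlockingConnection. This is a 
> minimal, illustrative sketch rather than the attached code; the port (5672) 
> and the credentials are assumptions based on the commands above.
>
> from proton import Message
> from proton.utils import BlockingConnection
>
> QUEUE = "exampleQueue"
>
> def produce(host, user, password, body, count):
>     # Roughly what python-qpid-producer.py does: send N identical messages.
>     conn = BlockingConnection("amqp://%s:5672" % host,
>                               user=user, password=password)
>     sender = conn.create_sender(QUEUE)
>     for _ in range(count):
>         sender.send(Message(body=body))
>     conn.close()
>
> def consume_one(host, user, password):
>     # Roughly what python-qpid-consumer.py does: receive and accept a single
>     # message, then close the connection.
>     conn = BlockingConnection("amqp://%s:5672" % host,
>                               user=user, password=password)
>     receiver = conn.create_receiver(QUEUE)
>     msg = receiver.receive(timeout=30)
>     receiver.accept()
>     conn.close()
>     return msg.body
>
> produce("server0", "user", "password", "TestMessageFooBar", 100)
> consume_one("server1", "user", "password")  # queue drops to 99
> consume_one("server0", "user", "password")  # queue drops to 98
> consume_one("server1", "user", "password")  # queue now grows instead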
> Technical details for the Python test described above:
>  * Apache ActiveMQ Artemis 2.10.1 on RHEL 7.7 64bit
>  * OpenJDK 11.0.5
>  * Python 3.4.10
>  * Apache Qpid Proton 0.29.0 installed via PIP
> In addition to the above, the following variations have been tested. The 
> problem still occurs with all of them:
>  * Protocol was changed to STOMP.
>  * Window-Based Flow Control was turned off on both sides, client and 
> server, via the consumerWindowSize setting (see the example after this list):
>  ** 
> [https://activemq.apache.org/components/artemis/documentation/latest/flow-control.html]
>  * Implementation was changed to Java using the Apache Qpid JMS library 
> (version 0.39.0 for the producer, version 0.46.0 for the consumer).
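> For reference, one way the flow-control variation might be configured (the 
> hosts and ports here are illustrative; the option names come from the Qpid 
> JMS and Artemis documentation): for the Qpid JMS client, prefetch can be 
> disabled with "amqp://server0:5672?jms.prefetchPolicy.all=0", and for the 
> Artemis core client, consumer buffering can be disabled with 
> "tcp://server0:61616?consumerWindowSize=0".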
> If this is not a bug, I would be very happy to receive any solution to this 
> problem, whether that is pointing out some mistake in the configuration or in 
> the consumer, an explanation that this is intended behavior, or some other 
> explanation.


