I'll work on that. The maven plugin is very interesting.
Thai
On Wed, Jun 2, 2021 at 10:12 PM Justin Bertram wrote:
> Could you also provide the steps you followed to reproduce the issue? It's
> important to not only have the code & configuration resources but the exact
> procedure you followed
Could you also provide the steps you followed to reproduce the issue? It's
important to not only have the code & configuration resources but the exact
procedure you followed so we are comparing apples to apples, as it were.
An automated process would be ideal here as that drastically speeds up the
Hi Justin,
I have a simple project that can be used to reproduce the redistribution
issue at https://github.com/lnthai2002/SimpleArtemisClient.
I hope you have some time to take a look.
Thai
On Wed, Jun 2, 2021 at 1:07 PM Thai Le wrote:
> Hi Justin,
>
> I am still working on the JMS queue for
Hi Justin,
I am still working on the JMS queue for now. I set the
connection-ttl-override = 6 explicitly on the server:
...
<connection-ttl-override>6</connection-ttl-override>
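For reference, this override lives in broker.xml; a minimal sketch, with the surrounding elements assumed and only the value quoted in this thread shown (the value is interpreted as milliseconds):

```xml
<configuration xmlns="urn:activemq">
   <core xmlns="urn:activemq:core">
      <!-- forces a connection TTL (in ms) for all clients,
           overriding whatever TTL the client negotiates -->
      <connection-ttl-override>6</connection-ttl-override>
   </core>
</configuration>
```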
Well, with a JMS topic the most common use-case is with a non-durable
subscription which would obviously be removed (along with all the messages
in it) as soon as the consumer's connection died. Furthermore, the queues
which represent non-durable subscriptions are named by the broker with a
UUID so
Also,
I am testing the JMS queue at the moment but my use case also includes
providing a topic. Would the topic redistribute messages differently?
Thai
On Tue, Jun 1, 2021, 22:30 Justin Bertram wrote:
> Thanks for the clarification.
>
> Did you wait for the connection TTL to elapse before looking for
I did wait for more than 30 minutes and I checked the web console of the old
Artemis node: the number of consumers was 0 while the count on the other
node was 1. At some point I saw the log of the old one print something like
"clean up resource ..." but the queue still had 7 messages.
I'll try to redu
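As an aside, the consumer and message counts can also be checked from the broker's CLI instead of the web console; a sketch assuming a local broker instance and default admin credentials (URL and credentials are assumptions, adjust for your setup):

```shell
# 'artemis queue stat' prints per-queue message and consumer counts,
# which makes it easy to compare the two cluster nodes side by side.
./bin/artemis queue stat --url tcp://localhost:61616 --user admin --password admin
```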
Thanks for the clarification.
Did you wait for the connection TTL to elapse before looking for
redistribution? Given your description, the consumer was terminated before
it properly closed its connection so the broker would still think the
consumer was active and therefore wouldn't redistribute an
Hi Justin,
It is not the same question. The question posted on Stack Overflow is about
the case where one of the brokers crashes and comes back. This question is
about the case where the message consumer/queue listener dies and comes back.
A few weeks back I was able to make this work on a cluster with 3 master
an
Isn't this essentially the same question you asked on Stack Overflow [1]?
If so, why are you asking it again here when you have marked the answer as
correct? If not, please elaborate on how the two use-cases differ.
Thanks!
Justin
[1]
https://stackoverflow.com/questions/67644488/activemq-arte
Hello guys,
I have a cluster of 2 Artemis brokers (2.17.0) without HA running in
Kubernetes. They are configured with redistribution-delay=0, but when the
consumer dies and comes back it connects to the other Artemis node, and
redistribution of leftover messages from the previous Artemis node does
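For reference, redistribution-delay is configured per address in broker.xml under address-settings; a minimal sketch, with the wildcard match pattern an assumption:

```xml
<address-settings>
   <!-- "#" matches every address; a delay of 0 means messages are
        redistributed as soon as the last local consumer goes away -->
   <address-setting match="#">
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>
```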