The logging I mentioned previously will hopefully shed some light on the
situation. It might even help you guys work up a reproducible test-case.

As for the impact, that explanation doesn't really cover exactly what's
happening (or isn't happening) in terms of messages and broker resources.
Those kinds of details are necessary for a meaningful analysis.


Justin

On Thu, May 30, 2024 at 4:23 PM William Crowell
<wcrow...@perforce.com.invalid> wrote:

> Justin,
>
> Thank you for the explanation.  Very helpful.  I strongly felt we could
> ignore this error as well, but here is the impact from what I am told:
>
> “Messages are not being distributed to other nodes in the cluster.  We
> have a feature where, when a call comes into one server, we create a
> database entry.  That entry is then sent over Artemis to notify the other
> servers, so that the call rings for users logged into their respective
> servers.”
>
> Regards,
>
> William Crowell
>
> From: Justin Bertram <jbert...@apache.org>
> Date: Thursday, May 30, 2024 at 4:34 PM
> To: users@activemq.apache.org <users@activemq.apache.org>
> Subject: Re: AMQ222139
> Let me provide a little background on what's happening behind the scenes in
> this circumstance...
>
> When nodes are clustered together as they are in your case then they send
> "notification" messages to each other to inform them of important changes.
> For example, when a multicast queue (or a "local queue binding" as it is
> referred to internally) is created on an address that matches a
> cluster-connection then the node on which it is created sends a
> BINDING_ADDED notification message to all the other nodes in the cluster.
> The other nodes then add what's called a "remote queue binding" internally
> so that later if they receive a message on that same address they will know
> to send it across the cluster to the queue that was just created. This
> functionality is what supports cluster-wide pub/sub use-cases where, for
> example, a JMS or MQTT subscriber on one node will receive messages
> published on a completely different node in the cluster.
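>
> To make the pub/sub part concrete, here is a minimal sketch in plain JMS
> (the host names, port, and topic name below are placeholders I made up for
> illustration, not taken from your setup):
>
> import javax.jms.*;
> import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
>
> public class ClusterPubSubSketch {
>    public static void main(String[] args) throws Exception {
>       // Subscriber attaches to node1. Creating the consumer creates a local
>       // queue binding there, which triggers a BINDING_ADDED notification to
>       // the other nodes in the cluster.
>       ConnectionFactory subCf = new ActiveMQConnectionFactory("tcp://node1:61616");
>       Connection subConn = subCf.createConnection();
>       Session subSession = subConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
>       MessageConsumer consumer =
>          subSession.createConsumer(subSession.createTopic("account.1234"));
>       subConn.start();
>
>       // Publisher attaches to a *different* node. The remote queue binding
>       // added there tells that node to forward the message across the cluster.
>       ConnectionFactory pubCf = new ActiveMQConnectionFactory("tcp://node0:61616");
>       try (Connection pubConn = pubCf.createConnection()) {
>          Session pubSession = pubConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
>          pubSession.createProducer(pubSession.createTopic("account.1234"))
>                    .send(pubSession.createTextMessage("hello from node0"));
>       }
>
>       Message received = consumer.receive(5000);
>       if (received != null) {
>          System.out.println("Received: " + ((TextMessage) received).getText());
>       }
>       subConn.close();
>    }
> }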
>
> In your specific case, a node is receiving a BINDING_ADDED notification for
> a remote binding which it has *already* created. Typically this is due to a
> misconfiguration as we've already discussed, but I suppose it's possible
> that there's some kind of race condition related to your use-case. You can
> turn on TRACE logging for the following categories to see details about
> notifications sent and received:
>
>  - org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectionImpl
>  - org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl
>
> Based on the activity in the cluster this could be a significant amount of
> logging so I recommend you direct this logging to its own file. When you
> receive another AMQ222139 warning message in the log you can search the
> logs for the remote queue binding name to see who sent the notification and
> when it was received previously (and hopefully some details that will shed
> light on exactly what is happening).
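>
> As a rough sketch, enabling that in the broker's log4j2.properties could
> look something like the following (the appender/logger names and file path
> here are just illustrative placeholders, not something pulled from your
> instance):
>
> appender.notifications.type = RollingFile
> appender.notifications.name = notifications_log
> appender.notifications.fileName = ${sys:artemis.instance}/log/notifications.log
> appender.notifications.filePattern = ${sys:artemis.instance}/log/notifications.log.%d{yyyy-MM-dd}
> appender.notifications.layout.type = PatternLayout
> appender.notifications.layout.pattern = %d %-5level [%logger] %msg%n
> appender.notifications.policies.type = Policies
> appender.notifications.policies.time.type = TimeBasedTriggeringPolicy
>
> logger.cluster_conn.name = org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectionImpl
> logger.cluster_conn.level = TRACE
> logger.cluster_conn.additivity = false
> logger.cluster_conn.appenderRef.notifications.ref = notifications_log
>
> logger.mgmt_svc.name = org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl
> logger.mgmt_svc.level = TRACE
> logger.mgmt_svc.additivity = false
> logger.mgmt_svc.appenderRef.notifications.ref = notifications_log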
>
> In any event, you still haven't explained what the impact of the AMQ222139
> warning has been on your use-case (if any). Can you clarify this point? My
> guess is that there's really no impact and that you can just ignore the
> warning.
>
>
> Justin
>
> On Thu, May 30, 2024 at 5:31 AM William Crowell
> <wcrow...@perforce.com.invalid> wrote:
>
> > More information here.  I found out the queues are temporary, which I do
> > not think is supported in a clustered setup.
> >
> > I also noticed, when we browsed the queues on the three clustered nodes,
> > that the address is the same but the queue names are different.  One of
> > the nodes has two addresses which are temporary.  Would this cause a
> > problem in the distribution of messages?
> >
> > Regards,
> >
> > William Crowell
> >
> > From: William Crowell <wcrow...@perforce.com.INVALID>
> > Date: Tuesday, May 28, 2024 at 5:13 PM
> > To: users@activemq.apache.org <users@activemq.apache.org>
> > Subject: Re: AMQ222139
> > Justin,
> >
> > This randomly happens throughout the day.  We are not sure what is
> > causing it.
> >
> > Use case:
> >
> > We have a number of accounts, and each account gets its own topic.  We
> > create the topics on demand as accounts are created and let Artemis
> > delete them after 30 minutes of inactivity.  We can have 600 accounts
> > active at a time, so the number of topics should be around 600.  I would
> > estimate we produce 1-3 messages per account per second, which ends up
> > being between 50,000,000 and 200,000,000 messages per day.
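> >
> > For reference, the on-demand create / 30-minute delete behavior is driven
> > by address-settings roughly like this (the match and values shown are
> > illustrative rather than our exact configuration):
> >
> >       <address-settings>
> >          <address-setting match="account.#">
> >             <auto-create-addresses>true</auto-create-addresses>
> >             <auto-create-queues>true</auto-create-queues>
> >             <auto-delete-addresses>true</auto-delete-addresses>
> >             <auto-delete-addresses-delay>1800000</auto-delete-addresses-delay>
> >             <auto-delete-queues>true</auto-delete-queues>
> >             <auto-delete-queues-delay>1800000</auto-delete-queues-delay>
> >          </address-setting>
> >       </address-settings>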
> >
> > We have a 3-node Artemis cluster.
> >
> > Regards,
> >
> > William Crowell
> >
> > From: Justin Bertram <jbert...@apache.org>
> > Date: Tuesday, May 28, 2024 at 4:28 PM
> > To: users@activemq.apache.org <users@activemq.apache.org>
> > Subject: Re: AMQ222139
> > Do you have a way to reproduce this? Can you elaborate at all on the
> > configuration, use-case, etc. which resulted in this? What has been the
> > impact?
> >
> >
> > Justin
> >
> > On Tue, May 28, 2024 at 3:01 PM William Crowell
> > <wcrow...@perforce.com.invalid> wrote:
> >
> > > Justin,
> > >
> > > I do not think I have that situation:
> > >
> > > …
> > >       <cluster-connections>
> > >          <cluster-connection name="my-cluster">
> > >             <connector-ref>artemis</connector-ref>
> > >             <message-load-balancing>ON_DEMAND</message-load-balancing>
> > >             <max-hops>1</max-hops>
> > >             <static-connectors>
> > >                <connector-ref>node0</connector-ref>
> > >                <connector-ref>node1</connector-ref>
> > >             </static-connectors>
> > >          </cluster-connection>
> > >       </cluster-connections>
> > > …
> > >
> > > Regards,
> > >
> > > William Crowell
> > >
> > > From: Justin Bertram <jbert...@apache.org>
> > > Date: Tuesday, May 28, 2024 at 3:57 PM
> > > To: users@activemq.apache.org <users@activemq.apache.org>
> > > Subject: Re: AMQ222139
> > > > Where would I see if I had multiple cluster connections to the same
> > > > nodes using overlapping addresses?  Would that be in broker.xml?
> > >
> > > Yes. That would be in broker.xml.
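> > >
> > > For example, an overlapping setup would look something like this sketch
> > > (the names here are made up): two cluster-connections pointing at the
> > > same nodes whose address filters match the same addresses.
> > >
> > >       <cluster-connections>
> > >          <cluster-connection name="cluster-a">
> > >             <address></address> <!-- empty: matches every address -->
> > >             <connector-ref>artemis</connector-ref>
> > >             <static-connectors>
> > >                <connector-ref>node0</connector-ref>
> > >             </static-connectors>
> > >          </cluster-connection>
> > >          <cluster-connection name="cluster-b">
> > >             <address></address> <!-- also matches every address: overlap -->
> > >             <connector-ref>artemis</connector-ref>
> > >             <static-connectors>
> > >                <connector-ref>node0</connector-ref>
> > >             </static-connectors>
> > >          </cluster-connection>
> > >       </cluster-connections>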
> > >
> > >
> > > Justin
> > >
> > > On Tue, May 28, 2024 at 2:39 PM William Crowell
> > > <wcrow...@perforce.com.invalid> wrote:
> > >
> > > > Justin,
> > > >
> > > > Where would I see if I had multiple cluster connections to the same
> > > > nodes using overlapping addresses?  Would that be in broker.xml?
> > > >
> > > > I thought I was running into this, but we do not use temporary
> > > > queues:
> > > >
> > > > https://issues.apache.org/jira/browse/ARTEMIS-1967
> > > >
> > > > This is Artemis 2.33.0.
> > > >
> > > > Regards,
> > > >
> > > > William Crowell
> > > >
> > > > From: Justin Bertram <jbert...@apache.org>
> > > > Date: Tuesday, May 28, 2024 at 1:29 PM
> > > > To: users@activemq.apache.org <users@activemq.apache.org>
> > > > Subject: Re: AMQ222139
> > > > As far as I know the only conditions that would result in this
> > > > situation are described in the warning message.
> > > >
> > > > Do you have multiple cluster connections to the same nodes using
> > > > overlapping addresses?
> > > >
> > > > Do you have a way to reproduce this? Can you elaborate at all on the
> > > > configuration, use-case, etc. which resulted in this? What has been
> > > > the impact?
> > > >
> > > > Lastly, what version of ActiveMQ Artemis are you using?
> > > >
> > > >
> > > > Justin
> > > >
> > > > On Tue, May 28, 2024 at 7:40 AM William Crowell
> > > > <wcrow...@perforce.com.invalid> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > What would cause AMQ222139?  I have max-hops set to 1.
> > > > >
> > > > > 2024-05-24 17:28:04,155 WARN  [org.apache.activemq.artemis.core.server]
> > > > > AMQ222139: MessageFlowRecordImpl
> > > > > [nodeID=2135063f-0407-11ef-9fff-0242ac110002,
> > > > > connector=TransportConfiguration(name=artemis,
> > > > > factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)?port=61617&host=mqtt-7727-node1-boxview-internal,
> > > > > queueName=$.artemis.internal.sf.my-cluster.2135063f-0407-11ef-9fff-0242ac110002,
> > > > > queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.2135063f-0407-11ef-9fff-0242ac110002,
> > > > > postOffice=PostOfficeImpl
> > > > > [server=ActiveMQServerImpl::name=mqtt-7727-node1.boxview.internal],
> > > > > temp=false]@3548f813, isClosed=false, reset=true]::Remote queue binding
> > > > > fa28ea36-19f2-11ef-b6bc-0242ac1100029a5d5fea-03fb-11ef-acb5-0242ac110002
> > > > > has already been bound in the post office. Most likely cause for this is
> > > > > you have a loop in your cluster due to cluster max-hops being too large
> > > > > or you have multiple cluster connections to the same nodes using
> > > > > overlapping addresses
> > > > >
> > > > > Regards,
> > > > >
> > > > > Bill Crowell
> > > > >
> > > > >
