It looks like the issues were related to Artemis somehow not always having a
complete cluster topology after a sequence of shutdown/scale-down and failback.
I changed the cluster connections to use UDP discovery/broadcast groups instead
of static TCP connectors. This seems to have been a workaround.
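For anyone who hits the same thing, this is roughly the shape of the change in
broker.xml (the group address, port, and connector name are placeholders, not
my actual values):

    <!-- advertise this broker's connector over UDP multicast -->
    <broadcast-groups>
       <broadcast-group name="bg-group1">
          <group-address>231.7.7.7</group-address>
          <group-port>9876</group-port>
          <broadcast-period>2000</broadcast-period>
          <connector-ref>netty-connector</connector-ref>
       </broadcast-group>
    </broadcast-groups>

    <!-- discover the other brokers the same way -->
    <discovery-groups>
       <discovery-group name="dg-group1">
          <group-address>231.7.7.7</group-address>
          <group-port>9876</group-port>
          <refresh-timeout>10000</refresh-timeout>
       </discovery-group>
    </discovery-groups>

    <!-- the cluster connection references the discovery group
         instead of a static-connectors list -->
    <cluster-connections>
       <cluster-connection name="my-cluster">
          <connector-ref>netty-connector</connector-ref>
          <discovery-group-ref discovery-group-name="dg-group1"/>
       </cluster-connection>
    </cluster-connections>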
Did you call start() on the PooledConnectionFactory?
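Something along these lines is what I'm asking about (the broker URL is just an
example):

    import javax.jms.Connection;
    import javax.jms.Session;
    import org.apache.activemq.pool.PooledConnectionFactory;

    public class PooledExample {
        public static void main(String[] args) throws Exception {
            // placeholder URL; point this at your broker
            PooledConnectionFactory pcf =
                new PooledConnectionFactory("tcp://localhost:61616");
            pcf.start(); // opens the underlying connection pool

            Connection connection = pcf.createConnection();
            connection.start();
            Session session =
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // ... create producers/consumers here ...
            session.close();
            connection.close(); // returns the connection to the pool
            pcf.stop();         // closes the pool; calling createConnection()
                                // after this point is a classic way to get
                                // "Pool not open"
        }
    }

If the factory gets stopped somewhere in your producer's lifecycle, the pool
underneath it is closed and you'd see exactly that error.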
Tim
On Tue, Mar 20, 2018, 8:44 PM Chester_Zheng wrote:
> Hello guys, I've been using ActiveMQ for a couple of months, but it has
> been misbehaving recently. The strange exception is "Pool not open". This
> exception is raised by my custom producer.
It looks like you have a spurious quote at the end of the
element; could that be causing the broker to fail to
load?
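Just to illustrate what I mean, here's a made-up example (not your actual
element) with that kind of stray quote:

    <!-- broken: note the extra quote after the attribute value -->
    <policyEntry queue=">" memoryLimit="64mb""/>

    <!-- fixed -->
    <policyEntry queue=">" memoryLimit="64mb"/>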
When I add that mangled element into my own activemq.xml file, I get a
SAXParseException on the console, but you haven't mentioned seeing that so
maybe that's not it... Or maybe th
Hello guys, I've been using ActiveMQ for a couple of months, but it has been
misbehaving recently. The strange exception is "Pool not open". This exception
is raised by my custom producer. I use a PooledConnectionFactory to create a
connection and then create a session. Has anyone else run into this problem?
I read that option, but it never clicked for me that that might be the root
cause. I'm glad you figured it out. Thanks for reporting back here so
someone else reading this thread in the future will know to check that as a
possible explanation.
Tim
On Tue, Mar 20, 2018, 3:07 PM alainkr wrote:
Tim, thanks a lot for your answer.
Sorry for not checking in earlier; the issue I was having (with 5.14.5) was
that I stumbled on the following problem.
We used selectorAware=true but did not use the
virtualSelectorCacheBrokerPlugin (as in
https://github.com/apache/activemq/blob/master/active
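In case it helps anyone else, this is roughly the combination we ended up with
in activemq.xml (the persistFile path is just an example):

    <destinationInterceptors>
       <virtualDestinationInterceptor>
          <virtualDestinations>
             <!-- selectorAware=true: only forward messages matching a
                  consumer's selector into the consumer queues -->
             <virtualTopic name="VirtualTopic.>" prefix="Consumer.*."
                           selectorAware="true"/>
          </virtualDestinations>
       </virtualDestinationInterceptor>
    </destinationInterceptors>

    <plugins>
       <!-- caches selectors so they survive consumer disconnects -->
       <virtualSelectorCacheBrokerPlugin
            persistFile="${activemq.data}/selectorcache.data"/>
    </plugins>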
I guess you'll have to weigh whether or not having an "idle spare backup"
is worth preventing fail-back for your application/use-case.
Justin
On Tue, Mar 20, 2018 at 2:18 PM, jarek.przygodzki <
jarek.przygod...@gmail.com> wrote:
> Thank you for the detailed explanation. The idea was to provide some
Thank you for the detailed explanation. The idea was to provide some sort of
idle spare backup so we would still have a live/backup pair even after a
master node failure.
Fail-back only works for a live/backup pair. It doesn't work for a
live/backup/backup triplet.
In your use-case there are 3 broker instances configured as live, backup,
backup. However, a backup server is owned by only one live server. This
means that when the 3 broker instances are started, only one of the backups
actually pairs with the live server; the other just waits as an idle spare.
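If it helps, the relevant ha-policy bits look something like this (a sketch,
not your exact config):

    <!-- master broker.xml -->
    <ha-policy>
       <replication>
          <master>
             <!-- check the cluster for a live server before starting -->
             <check-for-live-server>true</check-for-live-server>
          </master>
       </replication>
    </ha-policy>

    <!-- each backup's broker.xml -->
    <ha-policy>
       <replication>
          <slave>
             <!-- hand control back when the original live returns -->
             <allow-failback>true</allow-failback>
          </slave>
       </replication>
    </ha-policy>

But no matter how the backups are configured, only one of them can be paired
with the live server at a time.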
Hi,
I'm trying to set up a 3-node Artemis (2.5.0) HA cluster (1 master and 2
slaves) with replication, automatic failover/failback, and static connectors.
It works fine with just one slave, but with 2 slaves a strange thing happens:
failback doesn't work; when the original master comes back online, both
servers end up live.
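The cluster-connection part of the config is wired with static connectors,
roughly like this (host names are placeholders):

    <connectors>
       <connector name="node0">tcp://host0:61616</connector>
       <connector name="node1">tcp://host1:61616</connector>
       <connector name="node2">tcp://host2:61616</connector>
    </connectors>

    <cluster-connections>
       <cluster-connection name="my-cluster">
          <!-- this broker's own connector -->
          <connector-ref>node0</connector-ref>
          <static-connectors>
             <connector-ref>node1</connector-ref>
             <connector-ref>node2</connector-ref>
          </static-connectors>
       </cluster-connection>
    </cluster-connections>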
On Tue, Mar 20, 2018, 4:18 AM norinos wrote:
> Hi Tim.
>
> I deleted "db-531.log" and restarted ActiveMQ, but it failed to start
> because of the following error.
>
>
>
> 2018-03-20 17:16:15,890 | ERROR | Looking for key 531 b
Hi Tim.
Sorry, I can't attach the file to the issue because it contains confidential
information.
> So the question now is how you move forward. If you're able to live with
> reprocessing the 511 messages that those acks acknowledged, then just
> delete that file and continue on without it.