Just re-discovered this...
I tested this scenario with 2.4.0 and the attached configs, and I couldn't
make it fail as described in the original message. Failover and
failback worked without any of the issues described.
> To us, this meant that the current master (which was the original
Have you resolved the issue somehow?
--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html
We are experiencing the same thing and would appreciate a clear explanation.
Anyone have any insights into the Artemis behavior described here?
Thanks,
Anindya Haldar
Oracle Marketing Cloud
> On Jun 26, 2018, at 4:50 PM, Anindya Haldar wrote:
The setup:
- We are setting up a cluster of 6 brokers using Artemis 2.4.0.
- The cluster has 3 groups.
- Each group has one master and one slave broker (a master/slave pair).
- The HA uses replication.
- Each master broker configuration has the flag ‘check-for-live-server’ set to
true.
- Each slave broker
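For anyone reproducing this, the HA section of the broker.xml files in such a replicated pair typically looks like the sketch below (inside the core configuration element; the group name is illustrative, the element names are the standard Artemis 2.x replication settings):

```xml
<!-- master broker: replication HA with failback check enabled -->
<ha-policy>
  <replication>
    <master>
      <!-- pairs this master with the slave sharing the same group name -->
      <group-name>group-1</group-name>
      <!-- on restart, check whether another live server already holds
           this node's ID before becoming live again -->
      <check-for-live-server>true</check-for-live-server>
    </master>
  </replication>
</ha-policy>

<!-- slave broker: same group, allowed to fail back when the master returns -->
<ha-policy>
  <replication>
    <slave>
      <group-name>group-1</group-name>
      <allow-failback>true</allow-failback>
    </slave>
  </replication>
</ha-policy>
```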