If you are using multiple brokers, it is best to distribute them across
pods, such that a single pod failure does not result in a complete
outage.
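e.g. on the client side (a minimal sketch; the pod DNS names and port
are hypothetical), give the connection factory every broker endpoint so
that losing one pod does not cut clients off:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class MultiBrokerClient {
    public static void main(String[] args) throws Exception {
        // List a connector for each broker pod (hypothetical StatefulSet
        // DNS names); the client tries the others if the first is down.
        ConnectionFactory cf = new ActiveMQConnectionFactory(
                "(tcp://artemis-0.artemis-hs:61616,tcp://artemis-1.artemis-hs:61616)"
                        + "?reconnectAttempts=-1");
        try (Connection conn = cf.createConnection()) {
            conn.start();
            // create sessions, producers and consumers as usual
        }
    }
}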
On Fri, 11 Feb 2022 at 15:10, Thai Le wrote:
Hi guys
Thank you very much for sharing.
@Vilius I have tried the artemis-operator and the setup is much simpler
than my current one, although it is best suited for scalability, and no
slave is deployed, so the HA is considered the same as option 4. The only
thing that prevents us from using it is the m
Gary,
so an HA solution for Artemis running on Kubernetes is not worth it, as we
expect Kubernetes to recover anyway?
If a producer loses connection to the Artemis instance, would you not lose
the data? Or would a typical client try to resubmit it, or would the
client/application need to be designed to handle this?
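To make that concrete, here is a minimal sketch (broker address and queue
name are hypothetical) of a producer designed to resubmit: for persistent
messages the Artemis JMS client's send() blocks until the broker confirms,
so an exception means the message was not accepted and can safely be
retried:

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ResilientProducer {
    public static void main(String[] args) throws Exception {
        // reconnectAttempts=-1 makes the client retry the connection
        // indefinitely while the pod is being rescheduled.
        ConnectionFactory cf = new ActiveMQConnectionFactory(
                "tcp://artemis:61616?reconnectAttempts=-1");
        try (Connection conn = cf.createConnection()) {
            conn.start();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("orders"));
            TextMessage msg = session.createTextMessage("payload");
            // A JMSException from send() means the broker did not accept
            // the message, so the application can resubmit it.
            for (int attempt = 1; ; attempt++) {
                try {
                    producer.send(msg);
                    break;
                } catch (JMSException e) {
                    if (attempt == 5) throw e; // give up eventually
                    Thread.sleep(1000L * attempt);
                }
            }
        }
    }
}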
Hello,
what is the reconnect issue? How are your clients configured? Do they get
the topology from the pair of brokers on kube?
--
On re-connection:
failover with the Artemis JMS client will only occur between live/backup
pairs. It is restricted in that way to protect users of temp queues and
durable subs, b/c those resources live on a particular broker and are only
replicated to its backup.
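In client terms (a sketch; the host name is hypothetical), ha=true tells
the client to fetch the live/backup topology from the broker it connects
to, so on failure it fails over to the backup of that pair rather than to
an unrelated broker:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class PairAwareClient {
    public static void main(String[] args) throws Exception {
        // ha=true: the client learns the live/backup pair topology and
        // fails over only within that pair.
        ConnectionFactory cf = new ActiveMQConnectionFactory(
                "tcp://artemis-live:61616?ha=true&reconnectAttempts=-1");
        try (Connection conn = cf.createConnection()) {
            conn.start();
            // temp queues and durable subs created here stay usable after
            // failover because the backup holds the same state
        }
    }
}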
Hello,
I'm also interested in the recommended setup on Kubernetes for a HA
ActiveMQ Artemis broker.
What is possible, what is not? Is an Active/Passive setup
possible/supported on Kubernetes, or does that not make sense?
What is recommended?
Best Regards,
Jo
On Wed, 9 Feb 2022 at 19:00, ... wrote:
Hello,
We have been running artemis 2.17 with the replication HA policy (1 master
and 1 slave) in Kubernetes for a few months. I was advised to run artemis
without HA in Kubernetes since pods will be restarted anyway, but my setup
was a team decision so I did not make any change. Recently we had a few
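For reference, a minimal sketch of what such a replicated live/backup pair
looks like when configured programmatically (an embedded-broker equivalent
of the usual broker.xml; the names, addresses and directories are
hypothetical):

import java.util.List;
import org.apache.activemq.artemis.core.config.ClusterConnectionConfiguration;
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.ha.ReplicatedPolicyConfiguration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

public class ReplicatedMaster {
    public static void main(String[] args) throws Exception {
        Configuration config = new ConfigurationImpl()
                .setPersistenceEnabled(true)
                .setJournalDirectory("/var/lib/artemis/journal")
                // the live (master) side of the pair; the backup (slave)
                // would use ReplicaPolicyConfiguration instead
                .setHAPolicyConfiguration(new ReplicatedPolicyConfiguration());
        // replication runs over a cluster connection between the pair
        config.addConnectorConfiguration("self", "tcp://artemis-master:61616");
        config.addConnectorConfiguration("backup", "tcp://artemis-slave:61616");
        config.addAcceptorConfiguration("acceptor", "tcp://0.0.0.0:61616");
        config.addClusterConfiguration(new ClusterConnectionConfiguration()
                .setName("replication-cluster")
                .setConnectorName("self")
                .setStaticConnectors(List.of("backup")));

        EmbeddedActiveMQ broker = new EmbeddedActiveMQ();
        broker.setConfiguration(config);
        broker.start();
    }
}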