Thank you for the detailed explanation. The part about datastore locking and
how brokers behave is more or less clear.
So would you recommend using randomize=false? We will have periods where the
slave becomes the master and stays that way for an extended time. Does this
mean that with randomize=false clients will still have to wait the full
timeout every time they connect and the old master (which is first in the URI
list) doesn't respond because it is now the slave? Or should I just use a
TransportListener on the client side and ignore the randomize parameter?
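For reference, here is roughly what I mean on the client broker URL. This is only a sketch: the host names follow the earlier discussion, the timeout value is illustrative, and priorityBackup is an extra failover option I'm considering so that clients prefer the first URI and move back to it when the old master returns:

```
failover:(tcp://machineA:61616,tcp://machineB:61616)?randomize=false&priorityBackup=true&timeout=3000
```

With timeout set, a pending send fails after that many milliseconds instead of blocking indefinitely while no broker is reachable.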
As an alternative, does anybody know if I can use a non-HTTP SSL load balancer
and set the client URI to something like ssl://loadbalancer_host:61616 ? I'm
thinking that if the slave servers do not respond to requests until they
become master, that would allow me to have a simpler configuration for my
clients. If I ever need to add more slaves, I would just add them under the
same load balancer.
If that's possible, which of the two approaches fails over faster? We are
deploying a point-of-sale application and I want failover to happen in an
instant, without losing any transactions (if that's possible :)).
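For completeness, the TransportListener approach I mentioned would look roughly like this. This is a sketch only: it assumes the standard activemq-client API on the classpath, and the broker hosts are the example machines from this thread:

```java
import java.io.IOException;
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.transport.TransportListener;

public class FailoverAwareClient {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
            "failover:(tcp://machineA:61616,tcp://machineB:61616)?randomize=false");
        Connection connection = factory.createConnection();

        // Listen for transport-level events so the application can react
        // to failover instead of silently blocking.
        ((ActiveMQConnection) connection).addTransportListener(new TransportListener() {
            @Override
            public void onCommand(Object command) {
                // Called for every inbound command; usually left empty.
            }

            @Override
            public void onException(IOException error) {
                System.err.println("Transport error: " + error);
            }

            @Override
            public void transportInterupted() { // note: the API spells it this way
                System.err.println("Connection to broker lost; failover in progress");
            }

            @Override
            public void transportResumed() {
                System.err.println("Connection re-established");
            }
        });

        connection.start();
    }
}
```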
--
Vilius
-----Original Message-----
From: Jean-Baptiste Onofré <[email protected]>
Sent: Tuesday, November 30, 2021 6:01 PM
To: [email protected]
Subject: Re: ActiveMQ 5.16.x Master/Slave topology question
Hi,
the masterslave:() transport is deprecated. You can achieve the same thing
with randomize=false.
Correct: updateClusterClientsOnRemove only applies to network connectors, i.e.
when you have an active/active topology (an actual network of brokers).
No, the clients won't be stuck: they will reconnect to the new master.
Let me illustrate this:
- you have a NFS shared filesystem on machine C
- machine A mounts the NFS filesystem (from C) at /opt/kahadb
- machine B mounts the NFS filesystem (from C) at /opt/kahadb
- you start brokerA on machineA, brokerA is the master (transport connector tcp
on 61616)
- you start brokerB on machineB, brokerB is a slave (transport connector tcp on
61616, but not bound as the broker is waiting for the lock)
- in your client connection factory, you configure the broker URL with
failover:(tcp://machineA:61616,tcp://machineB:61616)
- as brokerA is master, your clients are connected to brokerA
- you shut down brokerA; brokerB takes the lock and becomes the new master
- your clients will automatically reconnect to brokerB
- you start brokerA, it's now a slave (as the lock is on brokerB)
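To make this concrete, the shared-storage part of each broker's activemq.xml would point at the common mount. A sketch only, reusing the path and port from the example above, not a complete configuration:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <!-- Both brokers point at the same NFS-backed directory; whichever
       broker starts first acquires the KahaDB lock and becomes master. -->
  <persistenceAdapter>
    <kahaDB directory="/opt/kahadb"/>
  </persistenceAdapter>
  <transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  </transportConnectors>
</broker>
```

brokerB's configuration is identical apart from brokerName.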
Regards
JB
On 30/11/2021 09:45, Vilius Šumskas wrote:
> Thank you for your response!
>
> Just out of curiosity, what is this masterslave:() transport about then?
>
> Also, if I don't configure network connection will
> updateClusterClientsOnRemove parameter take effect?
>
> My main concern is that clients will go into a stuck state during/after the
> failover. I'm not sure whether all I need is to handle this in the code with
> a TransportListener, or whether I also need to set updateClusterClients and
> updateClusterClientsOnRemove on the broker side to make failover smooth?
>