Hi JB,

+1 from me in general.


I have to say, I like the idea of separating the control plane from the data plane, 
using ZooKeeper for topology and leader election. It's something I think we need 
to do in Artemis too, to be honest.


Re KahaDB in particular, I think there were a lot of issues, and that's why it was 
removed/deprecated originally, no? I think it would be good to understand a bit more 
here, or else we risk re-introducing a historic issue.


I do like the idea of using BookKeeper for the storage layer as an alternative, 
though. It has served well as the storage layer for the Apache Pulsar project and 
has proven to be a very scalable setup. I wonder whether, with that, we would be in 
the realms of matching the scalability of those kinds of systems with this new 
setup, but with the advanced features and compatibility that ActiveMQ brings....


Best
Mike





On 17 February 2021 at 20:44, JB Onofré <[email protected]> wrote:


Hi everyone

In a cloud environment, our current ActiveMQ 5 topologies have limitations:

- master/slave works fine but requires either a shared file system (for 
instance AWS EFS) or a database. It also means that we only have one broker 
active at a time.
- a network of brokers can be used to get a kind of partitioning of messages across 
several brokers. However, if we have pending messages on a broker and we lose 
that broker, those messages are not available until we restart the 
broker (with the same file system).
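For context, the shared-storage master/slave setup above boils down to both brokers pointing KahaDB at the same shared directory; whichever broker acquires the file lock first becomes the active one. Roughly (directory path illustrative):

```xml
<!-- Shared-storage master/slave: each broker's activemq.xml points KahaDB
     at the same shared directory (e.g. an AWS EFS mount). Only the broker
     holding the file lock serves clients; the other waits as slave. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <persistenceAdapter>
    <kahaDB directory="/mnt/efs/activemq/kahadb"/>
  </persistenceAdapter>
</broker>
```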

The idea of replicatedKahaDB is to replicate messages from one KahaDB to 
another. If we lose the broker, a broker holding the replica is able to load 
the messages and make them available.
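Purely to illustrate what such a persistence adapter configuration might look like (every element and attribute name below is hypothetical, invented for illustration, not the actual implementation):

```xml
<!-- Hypothetical sketch of a replicated KahaDB persistence adapter:
     the journal is replicated to peer brokers, with ZooKeeper coordinating
     topology and leader election. All names here are invented. -->
<persistenceAdapter>
  <replicatedKahaDB directory="/var/lib/activemq/kahadb"
                    zkAddress="zk1:2181,zk2:2181,zk3:2181"
                    replicas="3"/>
</persistenceAdapter>
```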

I started to work on this implementation:
- adding a new configuration element as a persistence adapter
- adding a ZooKeeper client; ZooKeeper is used for topology storage, heartbeat, 
and leader election
- I’m evaluating the use of BookKeeper as well (directly as storage)
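On the leader-election point: the standard ZooKeeper recipe has each candidate create an ephemeral sequential znode under an election path, and the owner of the lowest sequence number is the leader; when its session expires, the znode disappears and leadership passes to the next-lowest. A minimal in-memory simulation of that recipe (not real ZooKeeper client code; the class and method names are invented for illustration):

```java
import java.util.Optional;
import java.util.TreeMap;

// Simulation of ZooKeeper's leader-election recipe: each broker "creates"
// an ephemeral sequential znode (here, a monotonically increasing sequence
// number); the broker owning the lowest live sequence number is the leader.
public class LeaderElectionSketch {
    private final TreeMap<Long, String> znodes = new TreeMap<>(); // seq -> broker id
    private long nextSeq = 0;

    // Broker joins: analogous to create("/election/n_", EPHEMERAL_SEQUENTIAL).
    public long join(String brokerId) {
        long seq = nextSeq++;
        znodes.put(seq, brokerId);
        return seq;
    }

    // Broker session expires: ZooKeeper would delete its ephemeral znode.
    public void leave(long seq) {
        znodes.remove(seq);
    }

    // Leader = owner of the lowest-numbered znode still present.
    public Optional<String> leader() {
        return znodes.isEmpty() ? Optional.empty()
                                : Optional.of(znodes.firstEntry().getValue());
    }

    public static void main(String[] args) {
        LeaderElectionSketch election = new LeaderElectionSketch();
        long a = election.join("broker-a");
        election.join("broker-b");
        System.out.println(election.leader().get()); // broker-a (lowest seq)
        election.leave(a);                           // broker-a crashes
        System.out.println(election.leader().get()); // broker-b takes over
    }
}
```

The key property this buys over a shared file lock is that failover needs no shared file system: ZooKeeper notices the expired session and the replica broker can promote itself.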

I will share a branch on my local repo with you soon.

Any comment is welcome.

Thanks
Regards
JB
