+1 for adding a replication backend
+1 for keeping KahaDB intact for stability
+1 for adding the replication as a separate backend adapter (perhaps one that
depends on and extends activemq-kahadb-store rather than modifying it directly?)
+1 on considering alternatives to ZooKeeper

IMHO, fully synchronous replication in distributed computing is more legend
than reality. It's only practical on very reliable, low-latency networks, and
there needs to be a plan for prolonged split-brain.

-Matt

> On Feb 18, 2021, at 8:55 AM, Christopher Shannon 
> <[email protected]> wrote:
> 
> I like the idea of having a distributed store for 5.x (and Artemis too), but
> I don't think it makes sense to mess with KahaDB at this point, as it is
> quite stable and I think it would be tough to get the replication to work
> properly. There were a ton of problems with ZooKeeper and LevelDB, which is
> one reason LevelDB was deprecated. If you do want to go that route, I would
> try to keep the original KahaDB intact, at least to not break existing users.
> 
> If you want to work on this, my vote would be to try to use BookKeeper and
> just write a store implementation for it, which should be much easier. I
> figure it makes sense to use an existing product that is designed to be
> replicated so we don't have to reinvent the wheel. I also think Artemis
> could potentially benefit from BookKeeper. Having multiple store choices
> would be good for users.
> 
> On Thu, Feb 18, 2021 at 8:03 AM Michael André Pearce
> <[email protected]> wrote:
> 
>> Hi JB,
>> 
>> +1 from me in general.
>> 
>> I have to say, I like the idea of separating the control plane from the
>> data plane, using ZooKeeper for topology and leader election. It's
>> something I think we need to do in Artemis too, tbh.
>> 
>> Re KahaDB in particular, I think there were a lot of issues, and that's
>> why it was removed/deprecated originally, no? I think it would be good to
>> understand a bit more here, or else we risk re-introducing a historic issue.
>> 
>> I do like the idea of instead using BookKeeper for the storage layer as an
>> alternative, though. It has done well as the storage layer for the Apache
>> Pulsar project and has been a very scalable setup. I wonder whether, with
>> this new setup, we would be in the realm of matching those kinds of systems
>> for scalability, but with the advanced features and compatibility that
>> ActiveMQ brings....
>> 
>> Best
>> Mike
>> 
>> 
>> 
>> 
>> On 17 February 2021 at 20:44, JB Onofré <[email protected]> wrote:
>> 
>> Hi everyone
>> 
>> In a cloud environment, our current ActiveMQ 5 topologies have limitations:
>> 
>> - master/slave works fine but requires either a shared file system (for
>> instance AWS EFS) or a database. It also means that we only have one
>> broker active at a time.
>> - a network of brokers can be used to get a kind of partitioning of
>> messages across several brokers. However, if we have pending messages on a
>> broker and we lose that broker, those messages are not available until we
>> restart the broker (with the same file system).
>> 
>> The idea of replicatedKahaDB is to replicate messages from one KahaDB to
>> another. If we lose a broker, a broker holding the replica is able to
>> load the messages and make them available.
>> 
>> I started to work on this implementation:
>> - adding a new configuration element as a persistence adapter
>> - adding a ZooKeeper client; ZooKeeper is used for topology storage,
>> heartbeat, and leader election
>> - I’m evaluating the use of bookkeeper as well (directly as storage)
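>> 
>> As a rough sketch (all element and attribute names below are hypothetical
>> and just illustrate the shape of the configuration, not an existing
>> ActiveMQ schema), the new persistence adapter could be wired into the
>> broker XML something like:
>> 
>> ```xml
>> <!-- Hypothetical sketch only: replicatedKahaDB and its attributes
>>      are illustrative names, not a shipped ActiveMQ element. -->
>> <persistenceAdapter>
>>   <replicatedKahaDB directory="${activemq.data}/kahadb"
>>                     zkAddress="zk1:2181,zk2:2181,zk3:2181"
>>                     zkPath="/activemq/replicated-kahadb"
>>                     replicas="3"/>
>> </persistenceAdapter>
>> ```
>> 
>> The zkAddress/zkPath pair would point at the ZooKeeper ensemble used for
>> topology and leader election, while replicas would control how many
>> KahaDB copies receive each message.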
>> 
>> I will share a branch on my local repo with you soon.
>> 
>> Any comment is welcome.
>> 
>> Thanks
>> Regards
>> JB
>> 
>> 