Hi

On Wed, Apr 26, 2017 at 6:51 AM, khandelwalanuj <
anuj.cool.khandel...@gmail.com> wrote:

> Hi,
>
> I was evaluating ActiveMQ's JDBC-based persistent store with journaling
> enabled. I have a couple of doubts:
>
> 1. I was reading
> https://access.redhat.com/documentation/en-US/Fuse_Message_Broker/5.4/html/Clustering_Guide/files/Failover-MasterSlave-JDBC.html
> which says that the journaled JDBC store is incompatible with the JDBC
> master/slave failover pattern. Does that mean automatic client failover
> won't happen here? What exactly does it mean?
>
Right now we don't have support for automated HA using the JDBC store.  The
JDBC store is a relatively new feature and the focus thus far has been on
stability.  However, adding HA is something we have planned.
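
For context, the journaled JDBC setup the linked doc refers to is the classic ActiveMQ 5.x one, configured roughly as below in activemq.xml. This is a sketch, not a complete config: the datasource bean id `postgres-ds` and broker name are illustrative placeholders.

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <!-- Journaled JDBC: writes go to a fast local journal under
       dataDirectory and are periodically checkpointed into the
       shared JDBC datasource -->
  <persistenceFactory>
    <journalPersistenceAdapterFactory
        journalLogFiles="5"
        dataDirectory="activemq-data"
        dataSource="#postgres-ds"/>
  </persistenceFactory>
</broker>
```

Note that the journal is local to each broker and is not shared, which is exactly why the doc says journaled JDBC does not combine with the JDBC master/slave pattern, where the database lock is the only shared state.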

>
> 2. If master/slave is properly supported, is persistent message loss
> possible with journaled JDBC? Below is one scenario:
>     "Both brokers are running on two different hosts, each one has its
> own journal store (activemq-data), and both share the same backend DB
> (let's say Postgres). With journaled JDBC, if both producer and consumer
> are up, messages received between checkpoints are not written to the
> backend DB; they are kept only in the journal store (activemq-data). Now
> suppose at this point in time the master goes down. The messages that were
> sent to the master but not yet delivered to the consumer: the new master
> will start from its own journal and the same backend DB, but the backend
> DB doesn't have those messages. So are they lost?"
>

I'm not 100% following the scenario here.  But in short, if you are using
persistent messages (or an appropriate quality of service), you'll never
get into a situation where messages are lost.  This is one of the
fundamental requirements of the broker.
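
As a minimal sketch of what "persistent messages" means at the JMS client level (the broker URL and queue name below are illustrative, and this assumes a broker is reachable):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentSend {
    public static void main(String[] args) throws JMSException {
        // Illustrative broker URL; adjust for your environment
        ConnectionFactory cf =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                session.createProducer(session.createQueue("TEST.QUEUE"));
            // PERSISTENT delivery (the JMS default): the broker must
            // store the message before the send completes, so it
            // survives a broker restart
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            producer.send(session.createTextMessage("hello"));
        } finally {
            conn.close();
        }
    }
}
```

With NON_PERSISTENT delivery, by contrast, the broker is free to keep the message only in memory, and loss on failure is expected.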

>
> Thanks,
> Anuj
>
>
>
> --
> View this message in context: http://activemq.2283324.n4.nabble.com/ActiveMQ-JDBC-with-journaling-enabled-tp4725234.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>
