Anuj and Martyn, were your question and your answer about ActiveMQ 5.x or Artemis? I think the question was about 5.x and Martyn's answer was about Artemis. If so, see a 5.x response below.
I don't believe that there is any reason other than the potential for message loss that you described (correctly) in #2 that would make the page you linked to in #1 say that you need to use non-journaled JDBC. If your use case can tolerate the loss of messages since the last checkpoint, I expect that you can make it work. If you choose to pursue it, you may want to use pluggable storage lockers (http://activemq.apache.org/pluggable-storage-lockers.html) for master election if whatever is built into the journaled JDBC code doesn't work; I have no idea how locks would interact with journaling.

Tim

On Apr 26, 2017 3:16 AM, "Martyn Taylor" <mtay...@redhat.com> wrote:
> Hi
>
> On Wed, Apr 26, 2017 at 6:51 AM, khandelwalanuj <
> anuj.cool.khandel...@gmail.com> wrote:
>
> > Hi,
> >
> > I was evaluating ActiveMQ's JDBC-based persistent store with journaling
> > enabled. I have a couple of doubts:
> >
> > 1. I was reading
> > https://access.redhat.com/documentation/en-US/Fuse_Message_Broker/5.4/html/Clustering_Guide/files/Failover-MasterSlave-JDBC.html
> > which says that the journaled JDBC store is incompatible with the JDBC
> > master/slave failover pattern. Does that mean automatic client failover
> > won't happen here? What exactly does it mean?
>
> Right now we don't have support for automated HA using the JDBC store. The
> JDBC store is a relatively new feature and the focus thus far has been on
> stability. However, adding HA is something we have planned.
>
> > 2. If master/slave is properly supported, is persistent message loss
> > possible with journaled JDBC? Below is one scenario:
> > "Both brokers are running on two different hosts, each has its own
> > journal store (activemq-data), and both share the same backend DB
> > (let's say Postgres). With journaled JDBC, if both producer and
> > consumer are up, messages received between checkpoints are not stored
> > in the backend DB; they are kept in the journal store (activemq-data).
> > Now, if the master goes down at this point, the new master will start
> > from its own journal and the same backend DB, but the backend DB
> > doesn't have those messages. So are the messages that were sent to the
> > master but not yet delivered to the consumer lost?"
>
> I'm not 100% following the scenario here, but in short: if you are using
> persistent messages (or an appropriate quality of service), you'll never
> get into a situation where messages are lost. This is one of the
> fundamental requirements of the broker.
>
> > Thanks,
> > Anuj
> >
> > --
> > View this message in context: http://activemq.2283324.n4.nabble.com/ActiveMQ-JDBC-with-journaling-enabled-tp4725234.html
> > Sent from the ActiveMQ - User mailing list archive at Nabble.com.
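For anyone following along, here is a rough sketch of what Tim's pluggable-storage-locker suggestion might look like in a 5.x activemq.xml. This is untested and illustrative only: the `postgres-ds` bean id, the broker name, and all the timing values are assumptions, and (per the caveat above) it is an open question whether a pluggable locker interacts correctly with the journaled adapter, so the locker is shown on the plain jdbcPersistenceAdapter, where it is documented to work; the journaled variant from the question is included commented out for comparison.

```xml
<!-- Sketch only: bean id "postgres-ds", broker name, and timing values are assumptions. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker-a">

  <!-- Plain (non-journaled) JDBC master/slave: the pluggable lease-database-locker
       performs master election via a lease row in the shared database, so only
       one broker at a time serves clients. -->
  <persistenceAdapter>
    <jdbcPersistenceAdapter dataSource="#postgres-ds" lockKeepAlivePeriod="5000">
      <locker>
        <lease-database-locker lockAcquireSleepInterval="10000"/>
      </locker>
    </jdbcPersistenceAdapter>
  </persistenceAdapter>

  <!-- Journaled variant from the question, for comparison: between checkpoints,
       messages sit in the local journal (dataDirectory) before reaching the DB,
       which is the window for the loss scenario described above. -->
  <!--
  <persistenceAdapter>
    <journaledJDBC dataDirectory="activemq-data" dataSource="#postgres-ds"/>
  </persistenceAdapter>
  -->
</broker>
```

On the client-failover part of question 1: clients would typically connect with a failover transport URI such as `failover:(tcp://host-a:61616,tcp://host-b:61616)` (hostnames assumed here), so that when the slave acquires the lock and starts accepting connections, clients reconnect to it automatically.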