The JMX operations "stop", "start" (or "restart") on the Master Broker do the
job.
I haven't personally tried this, but I'd expect that calling stop() on the
networkConnector for the master broker via JMX would do what you want
without requiring a full restart. (Then you could call start() to bring it
back, as the slave.) http://activemq.apache.org/jmx.html has a table that
lists the available MBeans and operations.
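If you want to script this instead of clicking through JConsole, something along these lines should work. This is only a sketch, not tested against a live broker: the host, the JMX port (1099) and the broker name ("amq-prod-1") are placeholder values, not values from this thread. The MBean name pattern is the one ActiveMQ has used since 5.8.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ForceFailover {

    // MBean name pattern used by ActiveMQ 5.8 and later.
    static ObjectName brokerMBean(String brokerName) throws Exception {
        return new ObjectName(
            "org.apache.activemq:type=Broker,brokerName=" + brokerName);
    }

    public static void main(String[] args) throws Exception {
        // Placeholder host/port/broker name - substitute your own.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://master-host:1099/jmxrmi");
        try (JMXConnector jmx = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = jmx.getMBeanServerConnection();
            // Stopping the master releases its ZooKeeper lock, so one of
            // the slaves is elected master and clients fail over to it.
            conn.invoke(brokerMBean("amq-prod-1"), "stop", null, null);
        }
    }
}
```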
Currently I stop the ActiveMQ Master Node (via a systemd service) to force a
Slave to become the new Master.
Afterwards I restart the stopped ActiveMQ Master Node (via the systemd
service), which will become a new Slave.
Is there another way to force a connection failover without stopping and
restarting?
Tim,
I have submitted a bug report in JIRA :
https://issues.apache.org/jira/browse/AMQ-6502
Patrick
Can you reproduce this reliably? If so, please submit a bug report in
JIRA.
Tim
On Nov 8, 2016 1:40 AM, "Patrick Vansevenant"
wrote:
The StorePercentUsage of the master node is increasing and decreasing, but
remains more or less stable over time.
The StorePercentUsage of the slave nodes (JMX), on the other hand, is
increasing all the time.
I have already noticed values of 200% and more.
The HA Cluster seems to work and I
Hi, I am also facing the same issue. Kindly suggest a solution. I have
restarted ZooKeeper one after the other on three servers. After starting the
slave ActiveMQ server, I am getting the issue reported above.
Connections from one broker go to whichever
member of a master/slave grouping is the active one at any point.
What you've described in this thread sounds like you're doing master/slave
using ZooKeeper and replicated LevelDB, which is why you only have one
active broker at a time. If you want a network
Thank you very much, that's what I understood about master/slave.
What I want to do is http://activemq.apache.org/networks-of-brokers.html,
but I always get master/slave behavior.
A+JYT
I'm trying to make an active/active cluster, not master/slave. I need to
secure it; I can't lose any messages.
A+JYT
PS I'm running ActiveMQ 5.11.1 and ZooKeeper 3.4.6
> I'm running ActiveMQ replicated LevelDB on 3 servers with two NICs, a 1Gbit
> and a 10Gbit.
> I want the servers to communicate via the 10Gbit NICs, to which the clients
> do not have access.
> If possible, how do I configure this?
> Thanks
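I haven't run this exact setup, but the replicated LevelDB store does expose a `bind` attribute (the local address the replication protocol listens on) and a `hostname` attribute (the address advertised to the other nodes), both documented on the replicated-leveldb-store page. Something along these lines, with the 10Gbit addresses (the 10.0.0.x values are purely illustrative), should keep replication traffic off the client network:

```xml
<persistenceAdapter>
  <replicatedLevelDB
      directory="activemq-data"
      replicas="3"
      zkAddress="10.0.0.1:2181,10.0.0.2:2181,10.0.0.3:2181"
      bind="tcp://10.0.0.1:61619"
      hostname="10.0.0.1"/>
</persistenceAdapter>
```

The client-facing transportConnectors would still bind to the 1Gbit addresses.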
Using the pure java LevelDB
implementation. | org.apache.activemq.leveldb.LevelDBClient |
hawtdispatch-DEFAULT-1
Please advise if somebody has any idea.
Thanks in advance
can impact the
cluster.
I have created 2 JIRAs for issues I have encountered with ActiveMQ with
Replicated LevelDB. The latest issue I am encountering is more critical, as the
earlier one can be worked around with NFSv3.
https://issues.apache.org/jira/browse/AMQ-6173
Please do; NFS issues are a known problem for KahaDB where you have a
shared filesystem, but replicated LevelDB isn't supposed to have any shared
filesystem elements, and I've never heard of anyone having similar issues.
Please document the options you're using for both your NFSv3 and NFSv4
mounts.
Hi Tim,
I have been researching ActiveMQ with Replicated LevelDB for more than a
month. I couldn't figure out why it won't fail over back to the first server in
the lineup. After a long process of elimination, I ended up suspecting NFSv4,
as I had issues with NFSv4 in non-ActiveMQ
I can't help you (I've never used LevelDB) and this list doesn't have a
resident LevelDB expert, but this sounds like a real bug so please submit a
bug in JIRA if you haven't already. Can you reliably reproduce the problem?
On Jan 31, 2016 11:33 PM, "Sunil Vishwanath" wrote:
The problem went away after the filesystem was downgraded to NFSv3 from
NFSv4.
Hello,
I am very new to this forum and I am hoping that you all will see this email.
Currently I am testing the following setup:
ActiveMQ 5.13.0 with LevelDB (3 node cluster).
Zookeeper 3.4.6 (3 node cluster).
Started up all 3 Zookeeper nodes.
Started up all 3 ActiveMQ nodes.
As I started
On the 1st slave:
StorePercentUsage=510
TempPercentUsage=146
and on the 2nd slave:
StorePercentUsage=424
Why do brokers in a LevelDB cluster have vastly different
StorePercentUsage values?
Are there any best practices on managing and monitoring ActiveMQ?
Master:
> du -hs /var/activemq/data/
> 266M /var/activemq/data
> Slave1:
> du -hs /var/activemq/data/
> 124M /var/activemq/data
> Slave2:
> du -hs /var/activemq/data/
> 119M /var/activemq/data
>
So the question is: what do
StorePercentUsage=93
TempPercentUsage=146
mean when using the Replicated LevelDB Store?
Or in other words, why does the Replicated LevelDB Store ignore these settings:
StoreLimit=8589934592
TempLimit=5368709120
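For reference, those limits come from the `systemUsage` element in activemq.xml; the percentages reported over JMX are store/temp usage relative to these limits. A typical fragment (with values mirroring the 8 GB / 5 GB limits above) would look like this:

```xml
<systemUsage>
  <systemUsage>
    <storeUsage>
      <storeUsage limit="8 gb"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="5 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```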
NFS trouble; did that happen again here?
On Aug 19, 2015 1:14 AM, khandelwalanuj khandelwal.anu...@gmail.com
wrote:
hi,
I am using replicated LevelDB with ActiveMQ 5.11.1. We have recently
observed an issue where a client connection removal caused the broker to
shut down. This is very weird
suspecting that this could cause the broker to stop?
Thanks,
Anuj
hi,
I am using replicated LevelDB with ActiveMQ 5.11.1. We have recently
observed an issue where a client connection removal caused the broker to
shut down. This is very weird. The client is a durable consumer which uses
transactions and calls commit after every 10 messages.
Can someone suggest what
Did anyone get a chance to look at this?
via ZooKeeper. So both of them send
their updated positions to ZooKeeper, and whoever has the latest position
becomes the master.
Thanks,
Anuj
Hi Jim,
But I am using replicas=1, and
quorum = replicas/2 + 1, not quorum = (number of brokers)/2 + 1.
So in my case a single broker can also serve the client (1/2 + 1 = 1).
Thanks,
Anuj
Any updates here?
I've had a dev cluster running for a little while now and twice I've seen
interruptions where the cluster didn't recover, didn't select a new master.
I had hoped AMQ-5082 fixed that issue but it looks like there might be
additional problems. How many of you folks are running replicated leveldb
There is no such thing as ActiveMQ 6, yet.
The code base of JBoss HornetQ was donated to the ActiveMQ community. That
contribution was initially named ActiveMQ 6 (hence the git repo name) but
is now known as ActiveMQ Artemis (Apollo's twin sister in mythology).
After the donation, a lot of work
There isn't any active development on Apollo at the moment... but if you're
keen to use it, and willing to contribute back, I know there are a few
others that would offer to help :) But at the moment Apollo doesn't have
replicated LevelDB. You could also take a look at ActiveMQ Artemis, which
does.
Hi All,
Sorry if this is a newbie question.
Can Apollo be used with a replicated LevelDB store?
We recently benchmarked Apollo and found it very suitable for our use case.
However, we would like to have the option for HA and failover.
Is there any doc explaining integrating a replicated LevelDB
On Tue, Mar 31, 2015 at 12:08 PM, wonderkind kevin...@am.sony.com wrote:
Is your broker running pretty clean when you send messages through the
fabric of network of replicated master/slaves?
Yes, though I'm only running the examples/openwire/swissarmy
producer/consumer example using the
Subject: Re: Creating a Network of Replicated LevelDB broker clusters
On Mon, Mar 30, 2015 at 4:57 PM, wonderkind wrote:
Do you have a sample activemq.xml?
On Tue, Mar 31, 2015 at 1:32 PM, wonderkind kevin...@am.sony.com wrote:
What version of ActiveMQ are you running with? I am working with 5.11.1.
I'm running 5.11.1 plus the patch in ticket AMQ-5082.
Jim
Hi Jim,
How big is your network? Do you see any significant degradation with a network
of replicated LevelDB brokers? Please see my comment below in red on my
original post on the forum.
Thanks.
Kevin
On Mon, Mar 30, 2015 at 2:09 PM, wonderkind kevin...@am.sony.com wrote:
How big is your network? Do you see any significant degradation with a
network of replicated LevelDB brokers?
I haven't started stress testing the configuration yet; the current plan
is to network two clusters
  <managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
  <!-- http://activemq.apache.org/replicated-leveldb-store.html -->
  <replicatedLevelDB
      replicas="3"
      zkSessionTimeout="5s"
      zkPath="/activemq/amq-prod-1"
      zkAddress="zk1.example.org:2181,zk2.example.org:2182,zk3.example.org:2181,zk4
Has anyone ever created a network of brokers, with each broker being a
Master/Slave replicated LevelDB store?
Is this possible? Any problems with stability during new-master election?
Thanks
On Fri, Mar 27, 2015 at 11:48 AM, wonderkind kevin...@am.sony.com wrote:
Has anyone ever created a network of brokers, with each broker being a
Master/Slave replicated LevelDB store?
My understanding has been that, for replicated LevelDB, you need
a set of three brokers, one master and two slaves
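For what it's worth, the usual way to point a networkConnector at a replicated (master/slave) cluster is the `masterslave:` URI scheme, which treats the listed brokers as one logical peer. A sketch, with hypothetical hostnames:

```xml
<networkConnectors>
  <!-- masterslave: connects to whichever of the listed brokers is the
       current master and reconnects when a new master is elected -->
  <networkConnector name="to-cluster-b"
      uri="masterslave:(tcp://b1:61616,tcp://b2:61616,tcp://b3:61616)"
      duplex="true"/>
</networkConnectors>
```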
So I think the problem is that
org.linkedin.zookeeper.tracker.ZooKeeperTreeTracker
doesn't appear to handle the event of a session disconnect.
Or at least the version used by ActiveMQ doesn't...
If I force tree to be rebuilt on a reconnect, my earlier unit test
passes:
I think you are correct here. The rebuild should work so long as the
session has not expired.
On 11 March 2015 at 20:51, James A. Robinson j...@highwire.org wrote:
Working my way through the code and the debug log from
the test, I see that the ZooKeeper group is getting emptied
out after session expiration occurs:
before the timeout:
2015-03-10 12:09:50,614 | DEBUG | ActiveMQ Task | ZooKeeper group for
01 changed: Map(foo -
On Wed, Mar 4, 2015 at 12:29 PM, James A. Robinson j...@highwire.org wrote:
Thanks. I'm pretty sure AMQ-5082 is what I'm seeing on 5.11.1.
I'll see if I can get the cycles to set up a unit test to replicate the
issue.
I think I've got the use case represented for
On Mar 3, 2015 8:23 AM, James A. Robinson jim.robin...@gmail.com wrote:
Hi folks,
While testing out ActiveMQ I've been building clusters in
VirtualBox. I've been spinning up two 3-node Replicated
LevelDB stores on my laptop.
I've noticed that the clusters can sometimes get into a
state where none of the nodes is the master. It appears
to me as though it's an issue
We are currently selecting a suitable data store for our ActiveMQ
implementation. Are there any benchmarks or performance comparisons for the
JDBC and replicated LevelDB data store options, other than "LevelDB is more
performant"?
org.apache.activemq.broker.region.cursors.QueueStorePrefetch@72e4521c:testpt,batchResetNeeded=false,storeHasMessages=true,size=2516,cacheEnabled=false,maxBatchSize:200,hasSpace:true - Failed to fill batch
As per
http://activemq.apache.org/replicated-leveldb-store.html
there are no additional requirements on the client.
Clients still need to know the location and address of each broker and specify
it inside the failover protocol (or use other forms of discovery such as
multicast).
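Concretely, a client would list all the brokers in a failover URI (the hostnames here are placeholders); the transport connects to whichever member is the current master, since slaves don't accept connections:

```
failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
```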
In JBoss Fuse
Ref http://activemq.apache.org/replicated-leveldb-store.html
Is this page actually saying that ZooKeeper will maintain one master
BROKER, and that writes through this broker will be replicated to slave
brokers?
I read (interpreted) the URL and title as being about LevelDB data being
I'm having a little trouble connecting the dots with the source code, but it
does appear that setting is using an ActiveMQ transport connection.
Can you try using ssl: and report back the results?
Hi
I've just set up three ActiveMQ 5.10 nodes in master/slave configuration
with replicated LevelDB. I've also set up the ZooKeeper servers on the same
nodes. ZooKeeper seems to be running pretty much okay as far as I can see.
When I crank up the first ActiveMQ node, it stops, waiting for another node
Never mind. There is an issue already open against it. Fixed in 5.11:
https://issues.apache.org/jira/browse/AMQ-5105
Did anyone get a chance to look at this?
ActiveMQ vendors, please respond to the above observations.
Thanks,
Anuj
Please respond.
Hi,
Is it possible to copy data from the leveldb store of one broker to another
broker while using replicated leveldb?
For example: let's say 3 brokers are running fine and replicating the data. I
suddenly stop one slave broker. After some time, before starting the slave,
can I manually copy the leveldb directory of the master and then start the slave?
As mentioned
here (http://activemq.apache.org/replicated-leveldb-store.html), the slave
with the latest update gets promoted to become the master. How does ZooKeeper
know which broker has the latest update? Does ZooKeeper get notifications from
the slaves for each update?
Thanks,
Steven
this message to other slaves.
What does the parameter signify?
Thanks,
Steven
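For context, the replicated-leveldb-store page documents `sync` as controlling where an update must have landed before it is considered complete; it accepts a comma-separated combination of local_mem, local_disk, remote_mem, remote_disk, or the shorthands quorum_mem (the default) and quorum_disk. For example (the ZooKeeper hostnames here are placeholders):

```xml
<replicatedLevelDB replicas="3" sync="quorum_mem"
    zkAddress="zk1:2181,zk2:2181,zk3:2181"/>
```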
From the documentation here,
http://activemq.apache.org/replicated-leveldb-store.html:
"All messaging operations which require a sync to disk will wait for the
update to be replicated to a quorum of the nodes before completing." So if
you configure the store with replicas="3", then the quorum size is 2.
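The quorum arithmetic from that paragraph, spelled out (note the integer division):

```java
public class Quorum {

    // Quorum size for a replicated LevelDB store: replicas/2 + 1.
    static int quorumSize(int replicas) {
        return replicas / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(quorumSize(3)); // 2: one of three nodes may be down
        System.out.println(quorumSize(5)); // 3: two of five nodes may be down
        System.out.println(quorumSize(1)); // 1: a single broker is its own quorum
    }
}
```

This is also why replicas=1, as discussed earlier in the thread, lets a single broker serve clients on its own: its quorum is 1.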
Please respond
(let's
say 3) ActiveMQ brokers belong to the same environment and are fighting to
become master.
Thanks,
Anuj
Hi,
I have gone through
http://activemq.apache.org/replicated-leveldb-store.html.
What does ZooKeeper do other than electing the master from the ActiveMQ brokers?
Who takes care of replicating data from the master broker to the slaves?
Thanks,
Anuj
Hi,
Do clients need to make any changes or include any jars to connect to
ActiveMQ broker nodes which are using replicated LevelDB? Since broker
instances connect to ZooKeeper to use replicated LevelDB, I am just curious
whether there are any changes required on the client side as well, apart from
tests by changing the GC to G1, hopefully avoiding a
full GC and the demotion of the broker to slave forcing a failover.
Quick update:
I have enabled G1GC for the JVM running the broker, and since then have had no
problems. The master broker stays master even under very heavy load.
So, my suggestion and recommendation when using replicated LevelDB would be
to use the G1 garbage collector, significantly reducing stop-the-world pauses.
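If it helps anyone, one place to set this (assuming the standard bin/env script that ships with ActiveMQ 5.x; the pause target is an illustrative value, tune to taste):

```shell
# Append G1 flags to the broker JVM options in bin/env
ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
export ACTIVEMQ_OPTS
```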
The
access via JNI may be less optimal than using an embedded Java (or Scala)
engine.
and reporting to have been started as
slave.
I don't know what exactly is going on, but it appears that something is
wrong with the replicated LevelDB which needs more investigation.
the issue shows up again.
Until then, sorry for potentially raising a false alarm.
All nodes (ZK, AMQ) are running on CentOS 6.5 64bit with the latest OpenJDK.
Three brokers form an active/passive cluster using the replicated LevelDB store.
I have installed native LevelDB 1.7.0, accessing it via the JNI driver.
The two clusters form a network of brokers.
The networkConnectors are defined in the activemq.xml files of only one
cluster, as duplex connections.
Here is my test case and the situation
      zkPath="/activemq/leveldb-stores"
      />
</persistenceAdapter>