Hi all,

I'm trying to set up a pool of ActiveMQ instances and I'm having a hard time
figuring out a few issues. The answers may be in the docs and I'm just not
wrapping my head around them; if so, I apologize, but I think I need someone
with more experience to point me in the right direction.

To give you a sense of the environment: I have four headless Linux servers,
with AMQ running on all four. The 01 node is running solo for the sake of
other apps/users in the environment. The 02, 03, and 04 nodes are pooled,
and once I get everything dialed in we'll move the pool to 01, 02, and 03
and no longer use the single instance. ZooKeeper is running on the 01, 02,
and 03 nodes and is being used to back AMQ.
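
For reference, the ZooKeeper side is a stock three-node ensemble. A minimal
conf/zoo.cfg sketch for it would look something like the following (the
zkhost1-3 names match the zkAddress in my broker config; the dataDir is an
assumption, not necessarily what I'm running):

    # conf/zoo.cfg on each of the 01, 02, and 03 nodes
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/var/lib/zookeeper
    clientPort=2181
    server.1=zkhost1:2888:3888
    server.2=zkhost2:2888:3888
    server.3=zkhost3:2888:3888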

One of my issues is that ZooKeeper doesn't always seem to want to let go of
nodes: I can peer into ZK, see the contents of /activemq/leveldb-stores, and
watch nodes come and go as AMQ starts and stops. Except sometimes ZK doesn't
let go. At the moment I have four nodes showing, and as far as I know only
three instances of AMQ are connecting. I can go in and delete nodes by hand
once AMQ is shut down, and sometimes (not always) ZK complains that
such-and-such node doesn't actually exist and then cleans it up. If I leave
the stale nodes in place, AMQ complains in the logs that it was expecting 3
nodes and is seeing N nodes, where N matches what I see in ZK.
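
For what it's worth, this is how I'm inspecting and hand-cleaning the store
entries, using the zkCli.sh shell that ships with ZooKeeper (the node name in
the delete is a placeholder, not one of my actual ephemeral node names):

    bin/zkCli.sh -server zkhost1:2181
    ls /activemq/leveldb-stores
    delete /activemq/leveldb-stores/<stale-node>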

On the AMQ side of things, I am getting the three nodes to start, but they
don't seem to be linked as I would expect. The contents of the activemq.log
files differ on the three nodes, as do the results of the 'query' command.
I would expect the two slave nodes to be identical, but they aren't. At the
moment, two nodes are reporting an uptime of 1 day 3 hours even though I
just restarted them. Those two are also reporting NodeRole=electing, but
different ZkPaths. The third is reporting as master, but it is also
reporting a different brokerName from the other two.

The relevant parts of my conf/activemq.xml file are:
    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="activeMqBroker" dataDirectory="${activemq.data}">
    ...
        <persistenceAdapter>
            <!-- hostname points to the local machine on each node -->
            <replicatedLevelDB
                directory="${activemq.data}"
                replicas="3"
                bind="tcp://0.0.0.0:0"
                zkAddress="zkhost1:2181,zkhost2:2181,zkhost3:2181"
                zkPassword=""
                zkPath="/activemq/leveldb-stores"
                hostname="[hostname]"
                />
        </persistenceAdapter>
        ...
        <transportConnectors>
            <transportConnector name="openwire"
                uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
                updateClusterClients="true"/>
            ...
        </transportConnectors>
        ...
    </broker>

My failover transport is:
failover://tcp://[node-02]:61616?wireFormat.maxInactivityDuration=90000

I'm assuming that with updateClusterClients="true", connected clients will
be kept up to date as brokers come and go.
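
For completeness, if I instead listed all three pooled brokers up front
rather than relying on the cluster updates, I believe the URI would look
like this (node names are placeholders for my actual hosts, and I've put
the wireFormat option on each nested tcp URI since it's a transport-level
option rather than a failover-level one):

failover:(tcp://[node-02]:61616?wireFormat.maxInactivityDuration=90000,tcp://[node-03]:61616?wireFormat.maxInactivityDuration=90000,tcp://[node-04]:61616?wireFormat.maxInactivityDuration=90000)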

Please let me know what other information I can provide.

Thanks!



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Failover-pool-setup-questions-tp4703467.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
