Hi,

We have five managed virtual servers with managed NetApp SAN storage at our
hosting provider. An ActiveMQ 5.12.0 instance runs on each server. We use
the NetApp as a shared file system for KahaDB and mount the folder via NFSv4.
This is our persistenceAdapter configuration:

        <persistenceAdapter>
            <kahaDB directory="/mnt/apache/queue/" lockKeepAlivePeriod="5000">
                <locker>
                    <shared-file-locker lockAcquireSleepInterval="10000"/>
                </locker>
            </kahaDB>
        </persistenceAdapter>
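
As far as I understand it, lockKeepAlivePeriod should be smaller than
lockAcquireSleepInterval so the master refreshes the lock file before a
slave polls it; our 5000/10000 values should satisfy that. In case it
matters, the share is mounted with an fstab entry along these lines (host,
export path, and option values here are illustrative, not our exact entry):

        # illustrative NFSv4 mount; noac disables attribute caching so lock
        # file changes are seen promptly; tune timeo/retrans for your setup
        netapp:/vol/apache_queue  /mnt/apache/queue  nfs4  hard,noac,timeo=150,retrans=3  0  0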

Whenever our hosting provider tests the NetApp redundancy and fails over
between the NetApp nodes, our ActiveMQ crashes. The following error appears
in the log of server1:

2016-09-19 12:27:26,100 INFO  [ActiveMQ Lock KeepAlive Timer] org.apache.activemq.util.LockFile: Lock file /mnt/apache/queue/lock, locked at Wed Sep 14 18:49:26 CEST 2016, has been modified at Mon Sep 19 12:27:21 CEST 2016
2016-09-19 12:27:26,101 ERROR [ActiveMQ Lock KeepAlive Timer] org.apache.activemq.broker.LockableServiceSupport: osm1Broker, no longer able to keep the exclusive lock so giving up being a master

Server5 then took over as master, but after 12 minutes the same log entries
appeared. I suspect there was another failover, but we get no such details
from our hosting provider. After that, ActiveMQ stops working and more and
more messages pile up in the queues. After a complete restart, everything
works as expected.
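
One workaround we are considering is configuring an ioExceptionHandler on
the broker so it reacts to persistent store IO errors instead of hanging.
Here is a sketch based on the DefaultIOExceptionHandler described in the
ActiveMQ docs; the attribute values are assumptions we have not yet tested
on 5.12.0:

        <broker xmlns="http://activemq.apache.org/schema/core" brokerName="osm1Broker">
            <!-- persistenceAdapter from above goes here -->
            <ioExceptionHandler>
                <!-- stop the transport connectors while the store is unusable
                     and recheck every 5 seconds until the file system is back -->
                <defaultIOExceptionHandler stopStartConnectors="true"
                                           resumeCheckSleepPeriod="5000"/>
            </ioExceptionHandler>
        </broker>

Would that help here, or is it the wrong tool for a lost file lock?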

Is there a misconfiguration here, or do you have any tips?
I thought about switching to CephFS or GlusterFS to get away from the
managed NetApp. Does anyone have experience with that?
