All,

We have two instances of OFBiz running, and both instances talk to the same
database.  When we update an entity through one instance, we want the cache
on all instances to be cleared automatically.  Based on the OFBiz
documentation on "Distributed Cache Clearing" we have set up the following.

 

Instance One we called the "master" (where these settings live is sketched just below):

*  Instance Id: master
*  Pool: masterPool

Instance Two we called the "slave":

*  Instance Id: slave01
*  Pool: slave01Pool
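For reference, this is where we set those values; the file paths and attribute
names below are from memory, so please double-check them against your OFBiz
version (shown here for the "master" instance, with "slave01"/"slave01Pool" on
the slave):

    # framework/common/config/general.properties
    unique.instanceId=master

    <!-- framework/service/config/serviceengine.xml, thread-pool element
         (other attributes left at their defaults) -->
    <thread-pool send-to-pool="masterPool" ...>
        <run-from-pool name="masterPool"/>
    </thread-pool>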

 

Installed ActiveMQ version 5.10.0 on the Master instance.

 

On both the Master and Slave instances we did the following:

1.    In framework/base/lib/

                 Copied activemq-all-5.10.0.jar
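We did not touch any classpath configuration for this step; our assumption is
that the base component already picks up every jar in that directory through a
wildcard entry in framework/base/ofbiz-component.xml, something like (from
memory, please verify in your checkout):

    <classpath type="jar" location="lib/*"/>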

 

2.    In framework/entity/config/entityengine.xml

 

                <delegator name="default" entity-model-reader="main"
entity-group-reader="main" entity-eca-reader="main"
distributed-cache-clear-enabled="true">
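We left the other distributed-cache-clear attributes of the delegator at their
defaults. From memory (attribute names and the class package may differ
slightly between OFBiz versions) the relevant ones look roughly like:

                <delegator name="default" entity-model-reader="main" entity-group-reader="main"
                    entity-eca-reader="main"
                    distributed-cache-clear-enabled="true"
                    distributed-cache-clear-class-name="org.ofbiz.entityext.cache.EntityCacheServices"
                    distributed-cache-clear-user-login-id="system">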

 

3.    In framework/service/config/serviceengine.xml

<jms-service name="serviceMessenger" send-mode="all">
    <server jndi-server-name="default"
        jndi-name="topicConnectionFactory"
        topic-queue="OFBTopic"
        type="topic"
        listen="true"/>
</jms-service>

 

4.    In framework/base/config/jndi.properties

 

java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory

java.naming.provider.url=tcp://<AMQ-IP-SERVER1>:61616

topic.OFBTopic=OFBTopic

connectionFactoryNames=connectionFactory, queueConnectionFactory, topicConnectionFactory

 

And commented out the default entries in jndi.properties:

# java.naming.factory.initial=com.sun.jndi.rmi.registry.RegistryContextFactory

# java.naming.provider.url=rmi://127.0.0.1:1099
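So, to be explicit, the active lines in framework/base/config/jndi.properties
on BOTH instances end up being the following (the comments are just our
understanding of what each line does):

# use ActiveMQ as the JNDI provider instead of the default RMI registry
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url=tcp://<AMQ-IP-SERVER1>:61616

# bind the JNDI name "OFBTopic" to the physical ActiveMQ topic "OFBTopic"
topic.OFBTopic=OFBTopic

# connection factories exposed through JNDI; "topicConnectionFactory" is the
# name referenced by jndi-name in the jms-service element above
connectionFactoryNames=connectionFactory, queueConnectionFactory, topicConnectionFactory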

 

With this configuration both instances correctly send messages to the
'OFBTopic' topic.  We verified this in the ActiveMQ admin console at
http://<AMQ-IP-SERVER1>:8161/admin/topics.jsp

 

However, when updating, for example, the Product entity on the 'Master'
instance, the cache for the Product entity on the 'Slave' instance is NOT
cleared.

Has anyone successfully configured distributed cache clearing using ActiveMQ
in the past?

 

We have also tried a second configuration:

 

1.    In framework/base/lib/

                 Copied activemq-all-5.10.0.jar

 

2.    In framework/entity/config/entityengine.xml

 

                <delegator name="default" entity-model-reader="main"
entity-group-reader="main" entity-eca-reader="main"
distributed-cache-clear-enabled="true">

 

3.    In framework/service/config/serviceengine.xml

<jms-service name="serviceMessenger" send-mode="all">
    <server jndi-server-name="activemq"
        jndi-name="ConnectionFactory"
        topic-queue="OFBTopic"
        type="topic"
        listen="true"/>
</jms-service>
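Our assumption with jndi-name="ConnectionFactory" is that ActiveMQ's
ActiveMQInitialContextFactory exposes a few JNDI names even without any extra
configuration, roughly:

# default JNDI bindings provided by ActiveMQInitialContextFactory (our understanding)
#   ConnectionFactory        -> a connection factory for the broker
#   dynamicTopics/<name>     -> topic <name>, created on demand
#   dynamicQueues/<name>     -> queue <name>, created on demand
# plain names such as "OFBTopic" only resolve if mapped via a topic.OFBTopic=... entry

Please correct us if that assumption is wrong.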

4.    In framework/base/config/jndiservers.xml

<jndi-server name="activemq"
    context-provider-url="tcp://<AMQ-IP-SERVER1>:61616?jms.useAsyncSend=true&amp;timeout=6000"
    initial-context-factory="org.apache.activemq.jndi.ActiveMQInitialContextFactory"
    url-pkg-prefixes=""
    security-principal=""
    security-credentials=""/>

 

 

5.    In framework/base/config/jndi.properties

 

java.naming.factory.initial=com.sun.jndi.rmi.registry.RegistryContextFactory
java.naming.provider.url=rmi://127.0.0.1:1099
topic.OFBTopic=OFBTopic

 

When using this second configuration, we receive the following error messages
in the log on startup, and no updates can be performed on the DB.

Failure in storeByCondition operation for entity [JobSandbox]  - SQL
Exception occurred on commit (Commit can not be set while enrolled in a
transaction)

Error in polling JobSandbox: [JobManager.java:188:ERROR] - SQL Exception
occurred on commit (Commit can not be set while enrolled in a transaction)

 

 

Any suggestions from the community would be of great help.

 

Len Shein

lsh...@solveda.com

 

Office: 516.742.7888 ext.225

Home Office: 732.333.4303

Cell: 917.882.8515

 



 
