Hi all

We are currently migrating one of our web applications from a home-grown,
database-based queueing mechanism to ActiveMQ (5.5).

After several days I still have not achieved a really reliable working
system, and I need some help.

The system consists of several instances of one web application sharing
one database. We want to use an embedded broker, persisted via JDBC,
running on all instances.

Messages will be inserted into a queue by all instances, and any one
instance should process them (the same instance or another one).


Our primary requirements are:

* new messages must be created transactionally together with changes in the
database; if the DB transaction fails, the message should be discarded and
never processed (see the sketch after this list)
* if one server goes down messages created there but not yet processed must
be processed by another instance
* the application provides a monitoring page, polled by Nagios, which warns
when messages are stuck (age based on creation timestamp)
* the brokers should be embedded in the Spring web app
* correct ordering of messages created on different instances is NOT an
issue
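
To make the first point concrete: the plan is to rely on Spring's local
transaction synchronization instead of an XA datasource. A minimal sketch
of the sending side (bean names and the vm:// URL are placeholders, not
our actual configuration):

<!-- connection factory for the embedded broker (placeholder URL) -->
<bean id="amqConnectionFactory"
      class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="vm://localhost"/>
</bean>

<!-- wraps the factory so transacted JMS sessions are synchronized with
     the surrounding Spring-managed DB transaction (best-effort, no XA) -->
<bean id="connectionFactory"
      class="org.springframework.jms.connection.TransactionAwareConnectionFactoryProxy">
    <property name="targetConnectionFactory" ref="amqConnectionFactory"/>
    <property name="synchedLocalTransactionAllowed" value="true"/>
</bean>

<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="sessionTransacted" value="true"/>
</bean>

If the DB transaction rolls back, the synchronized JMS session rolls back
with it, so the message is never published.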

Our secondary goals are:
* each instance should have an identical configuration to simplify
deployment
* no multicast, to avoid managing multicast address spaces
* if new instances come up, they should dynamically join the network (one or
two primary brokers may of course be configured)
* if a new queue is configured on one instance, it should be created
automatically (primarily during development)

Given these requirements, the current setup is as follows (identical on
every node):


[the broker XML configuration was stripped in the mail archive; a sketch
of its shape follows below]
with jms.broker.initialConnectURIs=tcp://host1:8606,tcp://host2:8606
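
Since the XML above was stripped, this is roughly the shape of the broker
configuration every node runs (a sketch; bean names and attribute values
are illustrative, not the literal file):

<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="${jms.broker.name}" useJmx="true">

    <!-- messages persisted in the shared database; the exclusive DB lock
         decides which broker is the master -->
    <persistenceAdapter>
        <jdbcPersistenceAdapter dataSource="#appDataSource"/>
    </persistenceAdapter>

    <transportConnectors>
        <!-- same port on every node, to keep configurations identical -->
        <transportConnector name="openwire" uri="tcp://0.0.0.0:8606"/>
    </transportConnectors>

    <networkConnectors>
        <!-- static list instead of multicast discovery -->
        <networkConnector uri="static:(${jms.broker.initialConnectURIs})"/>
    </networkConnectors>
</broker>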

I thought it would act as follows:

- the first broker to come up grabs the DB lock and becomes the master
- all others fail to get the DB lock and connect to the master broker (and
to each other)
- every broker tries to reach one of the two configured brokers, announces
itself as a member, and every client is informed about that
- any message put into a queue is stored in the DB by the local broker and
then transferred to the master for processing (and therefore no need for an
XA datasource)
- if the master goes down, another broker gets the lock and takes over as
master (clients re-attach; see the sketch after this list)
- the monitoring page shows connections to all brokers
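
For the clients I assumed a failover URL along these lines would keep them
attached to whichever broker currently holds the lock (illustrative only;
hosts and ports as above):

<bean id="failoverConnectionFactory"
      class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL"
              value="failover:(tcp://host1:8606,tcp://host2:8606)"/>
</bean>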

Instead, the following occurs:

- a message inserted on the current master gets processed
- a message inserted on a non-master gets queued but is never processed
until that instance is restarted and becomes the master
- the monitoring page shows a connection to the master for some time before
it disappears
- every non-master tries to get the DB lock until a timeout from the
database server occurs; I assume this is also the moment when the
connection gets lost


So my questions are:

- what is wrong with this setup?
- are there better ways to achieve our requirements?

I hope that someone can help me with this problem...

Regards Michael
