One thing to keep in mind when you are experiencing ordering issues is
that you can use ActiveMQ's message groups to route all messages for
one group to the same consumer. Very often that little trick helps,
and it still allows some concurrency across different groups.
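
For example, a route could stamp the JMSXGroupID header before handing
the message to the broker. A minimal sketch (the activemq component and
the orderId correlation header are assumptions on my part, and message
groups are an ActiveMQ broker feature, so this presumes an ActiveMQ
broker):

<route>
    <from uri="activemq:queue:incoming" />
    <!-- all messages with the same orderId go to the same consumer -->
    <setHeader headerName="JMSXGroupID">
        <simple>${header.orderId}</simple>
    </setHeader>
    <to uri="activemq:queue:grouped" />
</route>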

Andreas

On 09/05/2013 11:54 PM, gnani swami wrote:
Some of the messages were handled by the wrong service, and they all
ended up queued in the FailedErrorMessages directory. The application
failed to process them, so we end up with ordering problems and lost
messages. Looking at our various exceptions, it is quite easy for us to
see that they are caused by a concurrency bug inside the business logic.

We would like to move the <inOnly uri="direct://buzComponent" /> into a
new route behind some endpoint (an in-memory queue). A quick Google
search doesn't return anything related to multiplexing; could you point
me to a sample route? Can I create that multiplex in a new route behind
a new endpoint?
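
Would something like this be what you mean? This is my rough guess,
using a seda in-memory endpoint (a sketch only; seda consumes with a
single thread by default, made explicit here):

<route id="route-one">
    <from uri="ibmmq1:queue:fromSourceOne" />
    <inOnly uri="seda:multiplexed" />
</route>
<!-- ...the other three MQ routes changed the same way... -->
<route id="single-threaded-buz">
    <!-- one consumer thread drains the in-memory queue -->
    <from uri="seda:multiplexed?concurrentConsumers=1" />
    <inOnly uri="direct://buzComponent" />
</route>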

Thanks



On Fri, Sep 6, 2013 at 12:35 AM, Christian Posta
<christian.po...@gmail.com>wrote:

So what's the underlying concurrency issue? Ordering? Can you just have
a route that multiplexes all of those incoming queues into a single
queue (or in-mem queue), and then have a route with only one thread that
does CBR and invokes the correct bean?
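
Something along these lines; the header test and bean names are just
placeholders:

<route id="multiplex-cbr">
    <!-- single thread: content-based routing to the right bean -->
    <from uri="seda:allSources?concurrentConsumers=1" />
    <choice>
        <when>
            <simple>${header.sourceSystem} == 'high-speed'</simple>
            <to uri="bean:highSpeedHandler" />
        </when>
        <otherwise>
            <to uri="bean:defaultHandler" />
        </otherwise>
    </choice>
</route>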


On Thu, Sep 5, 2013 at 9:02 AM, gnani swami <projectnash1...@gmail.com> wrote:
Hi,

Camel is a wonderful piece of software, and we have been using it in
production for more than a year. Our version of Camel is 2.10.1.

I recently added a few additional routes (and a bug in the business
logic too) to interface with additional systems. All the routes do a
similar job: each reads JMS messages from its own endpoint (an MQ JMS
queue) and invokes buzComponent (a Camel component).

But we recently discovered that some subtle concurrency issues inside
buzComponent are causing us trouble, and I am looking for some sort of
Camel hack to prevent further damage by making all the endpoint
consumers use one and only one thread.

1) As a short-term solution, we would like to convert all the JMS
components to use a single-threaded thread pool (or taskExecutor), and
that new thread pool shouldn't allow more than one thread across all the
JMS endpoints. Is that possible? We are not worried about latency and
performance for now.
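
Something like this is what I had in mind for (1), wiring a shared
one-thread executor through JmsConfiguration (a sketch only; I am not
sure whether sharing one thread across four listener containers would
simply starve three of them, since the containers may hold on to their
invoker threads):

<!-- a pool that never grows beyond one thread -->
<bean id="singleThreadExecutor"
      class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="1" />
    <property name="maxPoolSize" value="1" />
</bean>

<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
    <!-- mqConnectionFactory: the MQConnectionFactory bean (placeholder name) -->
    <property name="connectionFactory" ref="mqConnectionFactory" />
    <property name="concurrentConsumers" value="1" />
    <property name="taskExecutor" ref="singleThreadExecutor" />
</bean>

<bean id="ibmmq1" class="org.apache.camel.component.jms.JmsComponent">
    <property name="configuration" ref="jmsConfig" />
</bean>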
2) If that is possible: our Camel error handlers are already in place.
Can we assume that the single thread would continue working despite any
exception routed to the errorHandler, with the error handlers running on
their own thread?

3) I am also happy to introduce one or more routes and endpoints inside
camelContext.xml, so that all the buzComponent calls can be redirected
into a new route and executed single-threaded somehow. Are there any
suggestions for a Camel component to use? (We could use any hack, as
long as it is configurable inside the XML.)

I am pretty sure all of the above approaches may be short-sighted, but I
am looking for a temporary solution that could work for a few days. We
would rather modify the Camel XML than change code and make it more
complex.

I appreciate feedback.

Regards
Mohan

<camelContext>
     <route id="route-one">
             <from uri="ibmmq1:queue:fromSourceOne" />
             <inOnly uri="direct://buzComponent" />
     </route>
     <route id="route-two">
             <from uri="ibmmq2:queue:fromSourceTwo" />
             <inOnly uri="direct://buzComponent" />
     </route>
     <!-- problematic route due to buzComponent locking -->
     <route id="high-speed-source-source-3">
             <from uri="ibmmq3:queue:fromHighSpeedSourceOne" />
             <inOnly uri="direct://buzComponent" />
     </route>
     <route id="high-speed-source-source-4">
             <from uri="ibmmq4:queue:fromHighSpeedSourceTwo" />
             <inOnly uri="direct://buzComponent" />
     </route>
</camelContext>

<!-- one JmsComponent bean per source; ibmmq2..ibmmq4 are configured the same way -->
<bean id="ibmmq1" class="org.apache.camel.component.jms.JmsComponent">
     <property name="connectionFactory">
         <bean class="com.ibm.mq.jms.MQConnectionFactory">
         ...
         </bean>
     </property>
</bean>



--
*Christian Posta*
http://www.christianposta.com/blog
twitter: @christianposta

