Hi

I have an application that, simply put, does this:

Ingoing (rough route sketch below):
1 - Fetch a file from FTP to a local directory.
2 - Read the file from the local directory, transform and split it into
several messages, and deliver them to a JMS queue.
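
For reference, here is a rough sketch of the ingoing route (Java DSL). The
endpoint URIs, the transform bean and the split expression are placeholders,
not my real configuration:

import org.apache.camel.builder.RouteBuilder;

public class IngoingRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // 1 - fetch files from FTP into a local directory
        from("ftp://user@ftphost/inbox?password=secret")
            .to("file:data/in");

        // 2 - read the local file, transform, split it into messages
        //     and deliver each one to the JMS queue
        from("file:data/in")
            .to("bean:transformBean")       // placeholder transformation step
            .split(body().tokenize("\n"))   // placeholder split expression
            .to("jms:queue:incoming");
    }
}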

Outgoing (rough route sketch below):
1 - Fetch messages from the JMS queue as they are produced, transform and
store them in a local directory one by one.
2 - Twice a day the files are read from the local directory, compiled into a
list of messages (one file) and delivered to FTP.
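
Here is the corresponding rough sketch of the outgoing routes. Again, the
URIs and beans are placeholders, the twice-a-day scheduling is left out, and
the aggregation strategy below just concatenates bodies as a stand-in for the
real aggregator:

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class OutgoingRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // 1 - consume messages from JMS, transform and store each one
        //     as a file in the local directory
        from("jms:queue:outgoing")
            .to("bean:formatBean")          // placeholder transformation step
            .to("file:data/out");

        // 2 - read the files back, aggregate them into one body and
        //     deliver the result to FTP
        from("file:data/out")
            .aggregate(constant(true), new AggregationStrategy() {
                public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
                    if (oldExchange == null) {
                        return newExchange;
                    }
                    String merged = oldExchange.getIn().getBody(String.class)
                            + "\n" + newExchange.getIn().getBody(String.class);
                    oldExchange.getIn().setBody(merged);
                    return oldExchange;
                }
            })
            .completionTimeout(60000)
            .to("ftp://user@ftphost/outbox?password=secret");
    }
}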

The problem is that when something goes wrong it is a nightmare to figure
out what has been sent and received and what has not. The key issues are: we
do not want duplicate input to the JMS queue, we do not want duplicate output
to FTP, and we need traceability of the messages.

There are several spots where this can go wrong at the moment (a rough
sketch of what I mean for points 1 and 2 follows this list):
1 - If JMS -> file fails, the message is lost. Can I use the transaction
manager here?
2 - If the split from file -> JMS fails, the messages already put on the JMS
queue stay there while the file rolls back. If we then run the file again,
there will be duplicate messages on the queue. Can I use the transaction
manager here?
3 - If the aggregator fails, the route that called it does not roll back and
all messages are lost. Why aren't they being rolled back?
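
To make the question concrete, this is roughly where I imagine the
transaction manager would go for points 1 and 2. The transacted() calls and
endpoint options below are just my guess, and I have not verified that they
give the rollback behaviour I am after:

import org.apache.camel.builder.RouteBuilder;

public class TransactedSketch extends RouteBuilder {
    @Override
    public void configure() {
        // point 1: consume from JMS inside a transaction so that a failure
        // while writing the file puts the message back on the queue
        from("jms:queue:outgoing?transacted=true")
            .transacted()
            .to("file:data/out");

        // point 2: I would like a failure during the split to also roll back
        // the messages already sent, but I doubt a plain transacted() does that
        from("file:data/in")
            .transacted()
            .split(body().tokenize("\n"))
                .to("jms:queue:incoming?transacted=true")
            .end();
    }
}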

Basically what I want is for either JMS messages or files to always roll
completely back to the "queue" they were taken from if something goes wrong.
That way I can control the state of the process based on their position in
either a directory or a JMS queue. At the moment, if there is an error, I
need to go through the logs to see what has ended up where and retrieve
payloads from the log files.

Are these routes a design dead end, or can I fix this somehow?
