Hi Mirco,

Ad 1) If it saves you from duplicating code & you have no issues
performance-wise: I would do it. It seems more logical (in my opinion).
If you need to check for completeness *before* doing any processing, I think
that's the only way to go. If you can allow for parallelism: split first & do
your processing, check for completeness afterwards, and roll back if the
completeness check fails.
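
Just to illustrate that second option, something along these lines (untested
sketch; ZipSplitter comes from camel-zipfile, and "direct:processEntry",
"direct:compensate" and the completenessRules bean are placeholders you would
have to provide yourself):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.dataformat.zipfile.ZipSplitter;

public class ProcessThenVerifyRoute extends RouteBuilder {

    @Override
    public void configure() {
        // if the completeness check fails afterwards, compensate / roll back
        onException(IllegalStateException.class)
            .to("direct:compensate");

        from("file:inbox?include=.*\\.zip")
            // split first and process the entries in parallel
            .split(new ZipSplitter()).streaming().parallelProcessing()
                .to("direct:processEntry")
            .end()
            // after the split is done, verify the whole set; the bean is
            // expected to throw IllegalStateException if something is missing
            .to("bean:completenessRules?method=verify");

        // stub routes so the sketch runs on its own
        from("direct:processEntry").log("processing ${header.CamelFileName}");
        from("direct:compensate").log("rolling back already processed entries");
    }
}

Note that with the default splitter behaviour the message after .end() is the
original zip again, so the verify bean needs its own way of knowing what was
actually processed (or you attach an AggregationStrategy to the split).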

Ad 2) No.  Why would that be a problem?

Cheers, Thomas.

-----Original Message-----
From: mfrenzel [mailto:mirco.fren...@gmx.de]
Sent: Thursday, 19 May 2016 15:00
To: users@camel.apache.org
Subject: Splitter EIP Best Practice

Hello everyone, this is my first post and I am seeking the opinion of more
experienced Camel users.
In my route, I split up the content of a zip file and determine the
completeness of its content by using the aggregator EIP, which applies certain
business rules in its completion predicate.
So far, so good, I think. After that I have to send individual REST calls for
each file contained in the original zip, so I split up again what has been
aggregated and set the necessary HTTP headers inside the splitter's split method.
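
Roughly, the route looks like this (simplified sketch, not my real code; the
"completenessRules" bean, the target endpoint and the correlation on
constant(true) are stand-ins/simplifications):

import java.util.ArrayList;
import java.util.List;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.dataformat.zipfile.ZipSplitter;

public class ZipCompletenessRoute extends RouteBuilder {

    @Override
    public void configure() {
        // one message per zip entry; read each entry into memory so it can
        // be aggregated and re-split later
        from("file:inbox?include=.*\\.zip")
            .split(new ZipSplitter()).streaming()
                .convertBodyTo(byte[].class)
                .to("direct:collect")
            .end();

        // collect the entries into a list; the business rules decide when the
        // set is complete (correlating on constant(true) assumes only one zip
        // is in flight at a time - a simplification)
        from("direct:collect")
            .aggregate(constant(true), (oldEx, newEx) -> {
                if (oldEx == null) {
                    List<Object> bodies = new ArrayList<>();
                    bodies.add(newEx.getIn().getBody());
                    newEx.getIn().setBody(bodies);
                    return newEx;
                }
                oldEx.getIn().getBody(List.class).add(newEx.getIn().getBody());
                return oldEx;
            })
            .completionPredicate(method("completenessRules", "isComplete"))
            // split the completed group again for the individual REST calls
            .split(body())
                .setHeader(Exchange.HTTP_METHOD, constant("POST"))
                .to("http4://example.org/api/files");   // stand-in endpoint
    }
}

In my actual route the HTTP headers are set inside the splitter's split method
rather than with a separate setHeader step as shown here, which is what my
second question is about.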

Here are my questions to you:
First: when you need to determine (business rule) completeness of some data
items, but need those items as individual messages afterwards, is aggregating
to determine completeness and splitting up again afterwards the way to go?
Second: when splitting up merged data of some kind, is it bad practice to do
something else on the split-up messages (e.g. setting HTTP headers) while
you're at it in the split method?
Adding another bean/processor after the splitter seemed like it would produce
mostly redundant code, which is why I did it in the splitter.
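
To make the second question more concrete, the split method does something
roughly like this (illustrative Camel 2.x-style sketch, not my real code; the
class and method names are made up, and as far as I understand returning
Message objects is what lets the splitter keep the headers set here):

import java.util.ArrayList;
import java.util.List;

import org.apache.camel.Exchange;
import org.apache.camel.Message;
import org.apache.camel.impl.DefaultMessage;

// wired into the route as: .split().method("splitAndPrepare", "splitAndSetHeaders")
public class SplitAndPrepare {

    public List<Message> splitAndSetHeaders(List<byte[]> aggregatedBodies) {
        List<Message> parts = new ArrayList<>();
        for (byte[] body : aggregatedBodies) {
            Message part = new DefaultMessage();
            part.setBody(body);
            // the "something else" done while splitting: prepare the REST call
            part.setHeader(Exchange.HTTP_METHOD, "POST");
            parts.add(part);
        }
        return parts;
    }
}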

Thanks for your time.


