Well, let's assume for a moment that reading the file is not the issue; then 
you could aggregate the lines into a larger message and send that across. Why you get an 
OOME when creating the XML can't be determined from the information provided. 

If you want to keep a number of lines together, you can use 
.split().tokenize("\n", <number>).streaming() from Camel 2.10 onwards, where 
<number> is the number of lines you would like to keep together (500 in your 
case).
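In route form, a minimal sketch could look like the following. The endpoint URIs ("file:inbox", "direct:processBatch") are hypothetical placeholders for your actual source and downstream processing:

```java
import org.apache.camel.builder.RouteBuilder;

public class BatchSplitRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("file:inbox")
            // Camel 2.10+: the group parameter of tokenize() combines
            // every 500 lines into a single exchange, and streaming()
            // avoids loading the whole file into memory at once.
            .split().tokenize("\n", 500).streaming()
            .to("direct:processBatch");
    }
}
```

Each exchange arriving at direct:processBatch then carries a body of up to 500 lines joined by newlines.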

But this won't help if the bottleneck is I/O, so you might want to check that first. 

Maruan Sahyoun

On 22.02.2013 at 08:09, cristisor <cristisor...@yahoo.com> wrote:

> This is where I found the solution to aggregate the lines and process them in
> batches but I ran into the problems that I described above:
> - if I send an exchange with more than 1 line, I have to make a lot of
> changes to the xml to xml mappers, choice processors, etc.
> - even if I solve the first problem, if I read 500 lines at once and
> create a big xml from the data, I get an OOME exception, so I should
> read at most 50 lines in order to make sure that no exceptions will arise. 
> 
> So I want to use this solution to read 500 lines at a time but then split
> the big exchange into 500 exchanges and send them one by one to the xml
> mappers. Is there a way to split a big exchange into several exchanges and
> have each one behave like the initial one?
> 
> I'm suspecting that the I/O operations are the most time consuming, because I
> saw in "Parsing large Files with Apache Camel" from catify.com that he
> raised the number of lines read per second from 200 to 4000.
> 
> I will also do the tests that you suggested.
> 
> Thank you for your reply,
> Cristian.
> 
> 
> 
> --
> View this message in context: 
> http://camel.465427.n5.nabble.com/Large-file-processing-with-Apache-Camel-tp5727977p5727992.html
> Sent from the Camel - Users mailing list archive at Nabble.com.
