Thanks a lot for your help.
I have another memory-related question.

The scenario is this: I receive a file with n records, split it on \n, and stream
the lines to a bean. I assume there is one bean instance in memory per thread
(I am using an Executor). I then pass the bean's output (also a String) to an
aggregator that concatenates the lines into buckets of 2000 using an
AggregationStrategy.
The performance I am seeing is very slow.

Please let me know your thoughts on the code below.
I appreciate your help.

                from("file:/input?fileName=someFile.txt&delete=true")
                .log("Starting to process big file: ${header.CamelFileName}")
                .split(body().tokenize("\n")).streaming().executorService(threadPool)
                    .bean(MyParser.class, "parseString")
                    .aggregate(header("CamelFileName"), new MyAggregationStrategy())
                        .completionSize(2000)
                        .executorService(threadPool1)
                        .to("jms:queue:test_request_3")
                .end()
                .log("Sent all messages to queue");


The AggregationStrategy does the following concatenation:

exchange1.getIn().setBody(exchange1.getIn().getBody(String.class)+exchange2.getIn().getBody(String.class));
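One likely cause of the slowness is that concatenation itself: each `+` copies the entire accumulated body, so building a bucket of n lines costs O(n²) in copied characters. A minimal, Camel-free sketch of the same bucketing done with a StringBuilder (the class and method names here are illustrative, not part of the actual route):

```java
import java.util.ArrayList;
import java.util.List;

public class BucketAggregator {

    // Concatenates lines into buckets of `bucketSize`, mirroring
    // aggregate(...).completionSize(2000), but builds each bucket with a
    // single StringBuilder so the work is linear in total input length
    // instead of quadratic.
    static List<String> aggregate(List<String> lines, int bucketSize) {
        List<String> buckets = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        int count = 0;
        for (String line : lines) {
            current.append(line);
            if (++count == bucketSize) {
                buckets.add(current.toString());
                current.setLength(0); // reuse the builder for the next bucket
                count = 0;
            }
        }
        if (count > 0) {
            buckets.add(current.toString()); // flush a partial final bucket
        }
        return buckets;
    }
}
```

Inside a real AggregationStrategy the same idea applies: keep a StringBuilder (or a List of parts joined once on completion) as the aggregated exchange's body and append to it, rather than rebuilding the String on every aggregate() call.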

--
View this message in context: 
http://camel.465427.n5.nabble.com/Spliter-in-Camel-tp4940967p4942950.html
Sent from the Camel - Users mailing list archive at Nabble.com.
