Hi Brad,

as you suggested, I split the big files into chunks of 1000 lines, changing my
route like this:

<route id="cr-cbikit-1">
    <from uri="{{uri.inbound}}?scheduler=quartz2&amp;scheduler.cron={{poll.consumer.scheduler}}&amp;scheduler.triggerId=FileRetriever&amp;scheduler.triggerGroup=IF_CBIKIT{{uri.inbound.options}}" />
    <split streaming="true" parallelProcessing="false">
        <tokenize token="\r" group="1000" regex="true" />
        <to uri="activemq:queue:Cbikit.Key" />
    </split>
</route>
                
<route id="cr-cbikit-2">
    <from uri="activemq:queue:Cbikit.Key?destination.consumer.prefetchSize=1" />
    <split streaming="true">
        <tokenize token="\r" />
        <to uri="sedaQueue:queue.ReadyToProcess?blockWhenFull=true" />
    </split>
</route>
                
<route id="cr-cbikit-3">
    <from uri="sedaQueue:queue.ReadyToProcess?concurrentConsumers=3" />
    process....
</route>

This works fine! Memory usage and processing time are both very good.
But I found a problem: the second route doesn't work when the token is \r.
When I log a message that contains 1000 lines I see
line\rline2\rline3\r...line999
The split only works when the token is \\r. Why? The file affected by this
issue is a CSV.
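To show the difference I mean, here is a small standalone Java sketch. It is
my assumption (not verified against Camel's tokenizer) that inside the XML
attribute token="\r" the value arrives as the two literal characters
backslash and r, which only matches a real carriage return when it is
compiled as a regex:

```java
import java.util.regex.Pattern;

public class TokenDemo {
    public static void main(String[] args) {
        // A payload like one chunk from route cr-cbikit-1: lines joined by real CRs.
        String payload = "line1\rline2\rline3";

        // In the XML attribute token="\r" the value is the two characters
        // '\' and 'r', not a carriage return. In Java source that is "\\r".
        String attrValue = "\\r";

        // Treated as a literal token: the two-char sequence "\r" never occurs
        // in the payload, so nothing is split.
        int literalParts = payload.split(Pattern.quote(attrValue)).length;
        System.out.println("literal token parts: " + literalParts); // 1

        // Treated as a regex: the pattern \r matches the CR character,
        // so the payload splits into its three lines.
        int regexParts = payload.split(attrValue).length;
        System.out.println("regex token parts: " + regexParts); // 3
    }
}
```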

Thanks a lot again for your support.

Best regards

Michele

--
View this message in context: 
http://camel.465427.n5.nabble.com/Best-Strategy-to-process-a-large-number-of-rows-in-File-tp5779856p5781750.html
Sent from the Camel - Users mailing list archive at Nabble.com.
