I suspect the biggest problem you may be having here is that the file isn't
truly streaming but getting slurped into memory as a whole.  I don't know
that for certain, but the two-queue test should show you.

That might be throwaway code or it might be the basis for the next
implementation, but it will at least let you see what you're dealing
with.  I wouldn't go too far past just reading/streaming the file in and
passing it on to the next queue, so you can verify that you are getting
individual data records.
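
Something along these lines is roughly what I have in mind, just as a
sketch (untested; the input directory and queue name are placeholders):

import org.apache.camel.builder.RouteBuilder;

public class StreamingFileRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Split the file line by line; streaming() keeps Camel from
        // slurping the whole body into memory before splitting.
        from("file:data/inbox?noop=true")
            .split(body().tokenize("\n")).streaming()
                .to("seda:records");

        // Throwaway consumer, just to confirm individual records arrive.
        from("seda:records")
            .log("record: ${body}");
    }
}

If the memory graph stays flat with that in place, the blow-up is
somewhere downstream of the split rather than in reading the file.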

On Fri, Apr 15, 2016 at 9:58 AM, Michele <michele.mazzi...@finconsgroup.com>
wrote:

> How big is that object on the queue?
> As an optimization, I changed the generic LinkedHashMap into a SerialNumber
> bean that has these properties:
>
> import java.io.Serializable;
> import java.util.Date;
>
> public class SerialNumber implements Serializable {
>
>         private static final long serialVersionUID = -9067015746823050409L;
>
>         private Date arg1;
>         private Date arg2;
>         private String arg3;
>         private String arg4;
>
>         // getter setter
> }
>
> Is it only a single line of data? Yes
>
> I tried seda:processAndStoreInQueue?concurrentConsumers=30 but the result
> is the same.
> I attached screenshots showing memory usage and live threads.
> memory-usage.png
> <http://camel.465427.n5.nabble.com/file/n5781179/memory-usage.png>
> live-threads.png
> <http://camel.465427.n5.nabble.com/file/n5781179/live-threads.png>
>
> I will try to decompose the problem into a number of separate seda: queues to
> verify performance and memory usage and will let you know.
>
> Thanks a lot again for your support.
>
> Best Regards
>
> Michele
>
