Hello,

   I was planning to use the Camel beanIO component, but I saw something in
its implementation that defeats the purpose of using beanIO to process large
files.

In the readModels(Exchange exchange, InputStream stream) method:

Object readObject;
while ((readObject = in.read()) != null) {
    if (readObject instanceof BeanIOHeader) {
        exchange.getOut().getHeaders().putAll(((BeanIOHeader) readObject).getHeaders());
    }
    results.add(readObject);
}
return results;
Here the list of objects is built up and only returned after the whole file
has been read, so for a very big file there could be memory issues. I hope my
understanding is right.
While in.read() only loads one record at a time, the list of read objects
will end up holding the whole file in memory.
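For contrast, the underlying BeanIO API itself can consume a file record by
record without buffering anything; the memory growth above comes purely from
collecting every record into the results list. A minimal sketch using plain
BeanIO (the mapping file name, the stream name "records" and the process()
handler are just placeholders, not anything taken from camel-beanio):

import java.io.InputStreamReader;
import org.beanio.BeanReader;
import org.beanio.StreamFactory;

StreamFactory factory = StreamFactory.newInstance();
factory.load("mapping.xml");   // placeholder mapping file
BeanReader in = factory.createReader("records", new InputStreamReader(stream));

Object record;
while ((record = in.read()) != null) {
    process(record);   // hypothetical per-record handler; nothing is accumulated
}
in.close();

So BeanIO itself streams fine; the streaming benefit is lost once every record
is added to a single List that is returned with the exchange.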




regards,
Felix T


