I have a Spring-configured (XML) Camel project where there are many routes, but the starting point of all the data is an sftp URI. E.g.
<from uri="sftp://{{gp.camel.sg.username}}@{{gp.camel.sg.host}}:{{gp.camel.sg.port}}/{{gp.camel.sg.path}}?password={{gp.camel.sg.password}}&amp;delete=true&amp;exclusiveReadLockStrategy=#nonBlockingSftpReadLockStrategy&amp;readLockCheckInterval=60000&amp;readLockTimeout=360000000&amp;filter=#sgHiFilter&amp;reconnectDelay=30000&amp;delay=60000"/>
<to uri="file://{{gp.home}}/work/sg_decrypt/?tempFileName=${file:name}.partial"/>

There are several more subsequent routes, all file based, except the very last one, which calls a bean URI (a web service call).

This mostly works fine, but Camel is not pulling the data from the SFTP server fast enough. At times the SFTP server can receive a lot of files, say about 1000, and we want those consumed by this process as soon as they are available on the SFTP server. The sftp route does check, via the nonBlockingSftpReadLockStrategy bean, that the file has not changed since it was last seen (e.g. the second poll of a file returns true from acquireExclusiveReadLock() if the last modified date and file size have not changed). The poll frequency is 60 seconds, so we expect a file to sit on the SFTP server for 2-3 minutes and then be sent to the next route. What is happening is that when the SFTP server receives a lot of files, the Camel route slows down and processes them only very slowly. I'm trying to understand why it would slow down.

By the way, we can use mget to quickly pull all the files from the SFTP server and put them in the same folder that the Camel route uses as its destination, and that works fine. So why can't the sftp endpoint do the same thing?

Also, we don't have any thread pools configured for these routes, except for one route where we have a multicast configured for parallel processing; all the rest should be using the default Camel thread pool. I did read online that the sftp input endpoint is single threaded, so I'm wondering if that has something to do with the problem, but I'm not sure how. We do want maximum concurrency with all the routes.

-Dave
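P.S. In case it helps to see the acquire logic spelled out: the check our nonBlockingSftpReadLockStrategy bean performs is roughly the following. This is a simplified sketch of the behavior described above, not the actual bean; the class, field, and method parameters here are made up for illustration, and the real bean implements Camel's GenericFileExclusiveReadLockStrategy interface, which has a different signature.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Simplified sketch of the "non-blocking" read lock check described above.
 * On the first poll a file is only recorded; on a later poll the lock is
 * granted if its size and last-modified timestamp are unchanged, i.e. the
 * sender appears to have finished writing it.
 */
public class NonBlockingReadLockCheck {

    /** Snapshot of a file as seen on a previous poll. */
    private static final class Snapshot {
        final long size;
        final long lastModified;

        Snapshot(long size, long lastModified) {
            this.size = size;
            this.lastModified = lastModified;
        }
    }

    private final Map<String, Snapshot> seen = new ConcurrentHashMap<>();

    /**
     * Returns true (lock acquired) only if this file was already seen on an
     * earlier poll with the same size and last-modified time; otherwise it
     * records the current values and returns false so the file is retried
     * on the next poll.
     */
    public boolean acquireExclusiveReadLock(String fileName, long size, long lastModified) {
        Snapshot previous = seen.get(fileName);
        if (previous != null && previous.size == size && previous.lastModified == lastModified) {
            seen.remove(fileName);  // clean up once the lock is granted
            return true;
        }
        seen.put(fileName, new Snapshot(size, lastModified));
        return false;
    }
}

Because of this check, every file needs at least two polls before it is picked up, which, with the 60 second delay on the endpoint, is where the expected 2-3 minute dwell time on the SFTP server comes from.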