I found the cause of the SFTP endpoint failures, or rather I found a
solution. We are using Camel 2.8.2, which uses JSCH 0.1.41, and that was the
problem; when I updated JSCH to 0.1.51 it resolved the issue.
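For anyone hitting the same issue, the JSCH upgrade can be forced without changing the Camel version. Assuming a Maven build (not stated in the thread), declaring the newer JSCH directly makes it win over the transitive 0.1.41 that camel-ftp pulls in:

```xml
<!-- Pin JSCH 0.1.51 so it overrides the transitive 0.1.41 from camel-ftp.
     Verify the effective version afterwards with: mvn dependency:tree -->
<dependency>
  <groupId>com.jcraft</groupId>
  <artifactId>jsch</artifactId>
  <version>0.1.51</version>
</dependency>
```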
To be specific, the problem we were able to confirm in production was that
the
On Sun, Nov 22, 2015 at 3:29 AM, David Hoffer wrote:
> I'm not sure how to block the polling.
>
> Here is what seems like an ideal approach...the SFTP polling always runs on
> schedule and downloads files with single thread to a folder. This won't
> use much memory as its
Well, an OutOfMemoryError says "allocating new memory failed". This has, afaik, no
effect on other running processes. OK, chances are high they will fail too in
subsequent processing steps if memory is not available in general.
What is the root cause? Are you reading a huge file into memory? Are you
splitting
I have more information on my case. It turns out that the reason the sftp
endpoint stopped polling and processing was that it got an out-of-memory
error. However, the strange part is I would expect that to kill Camel and
everything would stop, but that's not what was happening as other
I guess you need to block the polling while you process files in parallel. A
seda queue with a capacity limit will at least block the consumer. As I do not
know what exactly you are doing with the files, if always the same amount of
mem per file is required it's hard to tell what mem settings
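The blocking behavior of a capacity-limited seda queue can be illustrated with a plain-JDK sketch (this is not Camel's seda implementation, just the same idea: a bounded `BlockingQueue`, where `put()` blocks the producer once the queue is full, throttling the upstream poller):

```java
import java.util.concurrent.*;

public class BoundedHandover {
    public static void main(String[] args) throws InterruptedException {
        // Capacity-limited queue standing in for seda?size=2: once full,
        // put() blocks the producer (the sftp poller) until a worker drains it.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
        queue.put("file1.txt");
        queue.put("file2.txt");
        // offer() returns false instead of blocking, showing the queue is full
        System.out.println(queue.offer("file3.txt")); // prints false
        queue.take(); // a worker drains one element
        System.out.println(queue.offer("file3.txt")); // prints true
    }
}
```

In the Camel route this back-pressure is what keeps the poller from piling up 1000 in-flight files when downstream decryption is slower than the poll.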
Yes, when the sftp read thread stops it was still processing files it had
previously downloaded. And since we can get so many files on each poll
(~1000), and we have to do a lot of decrypting of these files in subsequent
routes, it's possible that the processing of the 1000 files is not done
I'm not sure how to block the polling.
Here is what seems like an ideal approach...the SFTP polling always runs on
schedule and downloads files with a single thread to a folder. This won't
use much memory as it's just copying one file at a time to the folder. Then
I'd have X threads take those
The files vary in size from 1MB to 40MB. The files are zips that have been
encrypted and then base64 encoded as txt files (don't ask why). So when we
get the files there is a lot of processing, as each subsequent step creates
a byte array and passes it to Camel for the next step. After a few
Set the queue size to 0 or -1 to have no queue and a direct handover
with that synchronous queue. And then configure the rejected policy to
decide what to do if there are no free threads, such as reject or use
the current thread etc.
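In plain JDK terms, the "no queue, direct handover" setup described above is a `ThreadPoolExecutor` built on a `SynchronousQueue`, with the rejected-execution handler deciding what happens when no worker is free. A minimal sketch (pool sizes are arbitrary; `CallerRunsPolicy` is the "use current thread" option):

```java
import java.util.concurrent.*;

public class DirectHandoverPool {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2,                      // core and max pool size
                60, TimeUnit.SECONDS,      // idle keep-alive
                new SynchronousQueue<>(),  // zero-capacity queue: direct handover
                // rejected policy: if no worker is free, run the task on the
                // submitting (caller) thread instead of throwing
                new ThreadPoolExecutor.CallerRunsPolicy());
        for (int i = 0; i < 5; i++) {
            final int n = i;
            pool.execute(() ->
                System.out.println("task " + n + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

With `AbortPolicy` instead, the submit would fail with `RejectedExecutionException` when both workers are busy, which is the "reject" choice.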
On Fri, Nov 20, 2015 at 11:09 PM, David Hoffer
Hi!
When your sftp read thread stops, are the files still in process? In our env
we had something similar in conjunction with splitting large files, because the
initial message is pending until all processing is completed. We solved it
using a seda queue (limited in size) in between our sftp
This part I'm not clear on and it raises more questions.
When using the JDK one generally uses the Executors factory methods to
create either a Fixed, Single or Cached thread pool. These will use a
SynchronousQueue for Cached pools and a LinkedBlockingQueue for Fixed or
Single pools. In the case
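The queue choices behind those factory methods can be verified directly, since `Executors.newFixedThreadPool` and `newCachedThreadPool` both return a `ThreadPoolExecutor` whose worker queue is exposed via `getQueue()` (`newSingleThreadExecutor` wraps its executor, so it can't be cast the same way):

```java
import java.util.concurrent.*;

public class ExecutorQueueTypes {
    public static void main(String[] args) {
        // Fixed pools queue work on an unbounded LinkedBlockingQueue;
        // Cached pools hand work directly to threads via a SynchronousQueue.
        ThreadPoolExecutor fixed  = (ThreadPoolExecutor) Executors.newFixedThreadPool(4);
        ThreadPoolExecutor cached = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        System.out.println(fixed.getQueue().getClass().getSimpleName());  // prints LinkedBlockingQueue
        System.out.println(cached.getQueue().getClass().getSimpleName()); // prints SynchronousQueue
        fixed.shutdown();
        cached.shutdown();
    }
}
```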
I'm trying to understand the default Camel Thread Pool and how the
maxQueueSize is used, or more precisely what's it for?
I can't find any documentation on what this really is or how it's used. I
understand all the other parameters as they match what I'd expect from the
JDK...poolSize is the
Yes, it's part of the JDK, as it specifies the size of the worker queue of
the thread pool (ThreadPoolExecutor).
For more docs see
http://camel.apache.org/threading-model.html
Or the Camel in Action books
On Fri, Nov 20, 2015 at 12:22 AM, David Hoffer wrote:
> I'm trying to