Adding to the above post...
I changed the assert statement to the one below, but still no luck (it should fail, but it doesn't).
getMockEndpoint("mock:dataToParse").assertIsSatisfied();
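One likely reason assertIsSatisfied() passes here: with no expectations recorded, the mock has nothing to verify. A minimal sketch, assuming a route that ends at mock:dataToParse and a direct:start entry point (both assumptions):

    MockEndpoint mock = getMockEndpoint("mock:dataToParse");
    // expectations must be recorded before the message is sent
    mock.expectedMessageCount(1);
    mock.expectedHeaderReceived("inputDataType", "TAP");

    template.sendBodyAndHeader("direct:start", "<data/>", "inputDataType", "CSV");

    // now fails, because the received header does not match the expectation
    mock.assertIsSatisfied();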
Appreciate your help.
Thanks & regards,
Ebe
Thanks for pointing that out. I have now moved to Camel version 2.9.2 and started getting exceptions.
That makes a lot of sense now, and I have cleared that issue.
Could you please also clarify the Mock issue I have?
I have the test method below and am using "CamelSpringTestSupport".
As I am setting the headers as
Hi All,
I have the below route:
${header.batchSize} == 2000
${header.inputDataType} == 'TAP'
Hi All,
I am looking to set up some mock tests using Apache Camel and I am seeing contradictory behaviour.
Please advise.
I am not sure if the mock is doing what it should (verifying that the headers are set properly).
${in.header.messageId} == 1
Hi Claus,
I apologise for not understanding your comments (mutate the existing oldExchange). Please help me understand if you have some time.
I am doing it this way because of the following ...
1. I guess that the old exchange is null only once for every aggregation block (i.e. since I have the compl
I am very sorry for not understanding this properly.
Please accept my apologies if I am wasting your time.
I am totally missing something here.
Below is the change I made (hoping that this is what you meant by mutating the old exchange).
I am using a StringBuilder so that the process is fas
I have set it as
-Xms1024m -Xmx1024m
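For reference, a minimal sketch of what mutating the existing oldExchange looks like; the class name and plain-String body handling are assumptions, not the original code:

    import org.apache.camel.Exchange;
    import org.apache.camel.processor.aggregate.AggregationStrategy;

    public class AppendingStrategy implements AggregationStrategy {
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            if (oldExchange == null) {
                // first message of a new group: keep it as-is
                return newExchange;
            }
            // reuse the existing exchange rather than creating a new one,
            // so no extra Exchange/CamelContext objects pile up on the heap
            String oldBody = oldExchange.getIn().getBody(String.class);
            String newBody = newExchange.getIn().getBody(String.class);
            oldExchange.getIn().setBody(oldBody + newBody);
            return oldExchange;
        }
    }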
Hi,
I have a scenario where I need to aggregate small files containing XML data (records) into large ones containing 5 XML records each.
But I ended up with a Java heap memory issue.
Is there a way to stream the records into a file till I reach 5 records, and then move on to create another
After running the Eclipse Memory Analyser, I see the below in the analysis data ...
One instance of "char[]" loaded by "" occupies 63,586,312 (87.23%) bytes
Please can someone help me in fixing this.
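One possible shape for this, sketched under assumptions (the paths, strategy name, and newline-delimited records are all guesses): split the input with streaming so the whole file never sits in memory, and let completionSize close each 5-record group:

    from("file:/data/input?delete=true")
        .split(body().tokenize("\n")).streaming()
        .aggregate(constant(true), new AppendingStrategy())
            .completionSize(5)
        .to("file:/data/output");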
Thanks & regards,
Ebe
This is the error trace I see in my logs ...
org.apache.camel.CamelExchangeException: Error occurred during aggregation.
Exchange[null]. Caused by: [java.lang.OutOfMemoryError - Java heap space]
at
org.apache.camel.processor.aggregate.AggregateProcessor.doAggregation(AggregateProcessor.jav
Hi All,
I am trying to aggregate a large number of XML files into files of 5 records.
I am getting a java.lang.OutOfMemoryError - Java heap space error.
I am trying to see if there are any leaks, but I do not see any.
Appreciate your thoughts on this.
Aggregation logic:
public class G
Hi Claus,
I have another issue with the aggregator.
Hi,
I am trying the aggregation with completionSize set to 2000.
What happens is that the first output has 2000 records, but the second output has 4000 records instead of the next 2000 records.
Is there something I am missing in my aggregator log
You are awesome.
Thanks a lot for the clue and it did the trick.
Below is the updated Aggregator code
    public Exchange aggregate(Exchange exchange1, Exchange exchange2) {
        try {
            StringBuilder builder = new StringBuilder();
            if (exc
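The snippet above is cut off; a hedged completion of the same shape (the body handling is inferred from the thread, not the original code):

    public Exchange aggregate(Exchange exchange1, Exchange exchange2) {
        if (exchange1 == null) {
            // first record of a new batch
            return exchange2;
        }
        // build the combined body in a fresh, local StringBuilder on each
        // call, so a completed batch cannot leak records into the next one
        StringBuilder builder = new StringBuilder();
        builder.append(exchange1.getIn().getBody(String.class));
        builder.append(exchange2.getIn().getBody(String.class));
        exchange1.getIn().setBody(builder.toString());
        return exchange1;
    }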
Hi All,
Is there a way to capture the file name after it passes through the Aggregator?
I am aggregating the contents of various files. The output should go into another file, which has to have the same file name as one of the processed files.
Looks like after passing through the aggregator, the file name
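One way to do this, offered as an assumption rather than the thread's answer: copy the CamelFileName header across inside the AggregationStrategy so it survives aggregation:

    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            return newExchange; // the first file's headers are kept as-is
        }
        // carry the latest file name along on the aggregated exchange
        oldExchange.getIn().setHeader(Exchange.FILE_NAME,
                newExchange.getIn().getHeader(Exchange.FILE_NAME));
        // ... append bodies as before ...
        return oldExchange;
    }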
Thanks a lot Marco. It was what I was looking for. I had to pass the output from the Aggregator to a bean to wrap the aggregator output inside the header and footer nodes.
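A minimal sketch of such a wrapping bean; the class name and the envelope element are placeholders, since the real schema is not shown in the thread:

    public class EnvelopeBean {
        // wraps the aggregated records in the document's header/footer nodes
        public String wrap(String aggregatedRecords) {
            return "<Envelope>" + aggregatedRecords + "</Envelope>";
        }
    }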
Best regards,
Ebe
Hi All,
I am trying to merge two XML files of the same format (i.e. they contain the same nodes but different data).
I use the tokenizeXML and tokenizePair utilities.
But I am not sure how to include the header in the output XML file.
E.g. I have 2 XML files like the below ...
http://www.ipdr.org/na
Thanks, I tried using gzip (as we moved to Solaris) and it does work as expected.
But my issue is to compress before moving the files to backup.
The above endpoint moves the file as-is to the directory "/work_dir/data/input/bkp/tap". But we want the file to be compressed before being backed u
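One possible shape for this (a sketch under assumptions, not the original config): marshal the body with the gzip data format and write it to the backup directory yourself, instead of relying on the move option:

    from("file:/work_dir/data/input/tap?delete=true")
        .marshal().gzip()
        .to("file:/work_dir/data/input/bkp/tap?fileName=${file:name}.gz");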
Hi All,
For some reason I am not able to see any of the logs defined in the below
route.
Appreciate your help in solving this.
Noticed that if I remove the choice from the second route, all the logging appears in my log file.
I need to find out which choice branch is executed.
I see something strange here.
If I have the below two routes, the logging works perfectly.
${file:path}
${file:name}
ltel2
Hi All,
I am trying to set a few headers, but it does not seem to work.
I guess I am missing something, but I am not able to find out what.
The below logs are not printed in the log file.
${file:path}
${file:name}
ltel2
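A hedged Java DSL equivalent of what this post seems to describe (the endpoint, header names, and values are assumptions): set the headers, then log them to confirm they are present:

    from("file:/data/input")
        .setHeader("inputDataType", constant("TAP"))
        .setHeader("batchSize", constant(2000))
        .log("path=${file:path} name=${file:name} type=${header.inputDataType}");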
Apologies and thanks.
But enabling stream caching also did not solve the problem.
Sorry to bother you, but is there something wrong with the below code?
Note: the input is a large String.
${in.header.batchSize} == 2000
Yes, but the Camel documentation says "Multicast will implicitly cache streams to ensure that all the endpoints can access the message content".
http://camel.apache.org/stream-caching.html
I tried using the Java DSL to compress files using zip, and it did actually compress these files. But I was not able to extract them. It looks like they are compressed, but not the same way as a normal zip file.
What I am trying to do is something different. The input is actually a large String and not
Hi, I am trying to use the Zip / GZip data format to compress the output files using the Spring DSL implementation.
But it looks like the files are not compressed. I am not sure what mistake I am making.
Appreciate your help.
Thanks & regards,
Ebe
Hi All,
I guess there is no TypeConverter available in Camel to convert a StringBuilder to an InputStream.
As most aggregations involve a StringBuilder, it would be nice to have this TypeConverter available as well.
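In the meantime, a minimal sketch of such a converter (this class is an assumption, not part of Camel); it is registered by listing the class in a META-INF/services/org/apache/camel/TypeConverter file:

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import org.apache.camel.Converter;

    @Converter
    public final class StringBuilderConverter {

        private StringBuilderConverter() {
        }

        // lets Camel coerce a StringBuilder body into an InputStream
        @Converter
        public static InputStream toInputStream(StringBuilder builder) {
            return new ByteArrayInputStream(builder.toString().getBytes());
        }
    }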
Thanks & regards,
Ebe
Hi All,
I am looking to compare the CamelBatchSize available in the Exchange header to an integer using the Spring DSL.
I tried various options, but I am not getting it right.
1. ${header.batchSize} == 2000
2. ${in.header.batchSize} == 2000
I tried logging the above values, but they were empty.
processing PAR
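A hedged guess at the root cause: the batch-capable consumers set CamelBatchSize as an exchange property, not a header, which would explain why the header expressions log as empty. In the Java DSL the comparison could look like:

    from("file:/data/input")
        .choice()
            .when(simple("${property.CamelBatchSize} == 2000"))
                .log("processing a full batch")
        .end();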
Thanks a lot William. I resolved this by just passing the Headers to the bean
method. This made sure the original Exchange is not changed.
public TracerEntity traceExchange(@Headers Map headers)
throws Exception {
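A self-contained sketch of the fix described here (the surrounding class and the TracerEntity constructor are assumptions): Camel injects the message headers, so the bean never touches the exchange body:

    import java.util.Map;
    import org.apache.camel.Headers;

    public class TracerBean {

        public TracerEntity traceExchange(@Headers Map<String, Object> headers)
                throws Exception {
            // only the headers are handed in; the message body is untouched
            return new TracerEntity(headers); // hypothetical constructor
        }
    }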
Thanks a lot for the quick reply. Is there a similar annotation for passing Exchange properties?
Hi All,
I am using wireTap to log the header / property details.
I also want to make sure I do not copy the whole body of the message, as that would cause memory-related issues when dealing with a large number of huge input files.
I am trying to understand the behind-the-scenes work of Camel.
I tr
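One way to tap without carrying the body along, sketched under assumptions (Camel 2.8+ onPrepare; the endpoint names are invented): clear the body on the tapped copy before it leaves, so only the headers travel to the trace route:

    from("file:/data/input")
        .wireTap("direct:trace").onPrepare(new Processor() {
            public void process(Exchange exchange) throws Exception {
                // runs on the copy only; the original keeps its body
                exchange.getIn().setBody(null);
            }
        })
        .to("seda:process");

    from("direct:trace").bean(TracerBean.class, "traceExchange");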
Hi Raul / William, thanks for your help.
Actually, sending the intercept to a seda queue also did not do the trick. It still messes up the original exchange.
I tried creating a new ProducerTemplate, but that also did not work.
If I just do a stream:out (DSL) or System.out.println (Java DSL), the ori
Looks like the JPA connection does not close after persisting the data in the
database. Not sure how to resolve this.
Appreciate any help.
Hi Raul, sorry for asking again.
For some reason the processed file is not being moved after being processed (i.e. the route just hangs after printing the log "Persisting to Trace to Database").
The interesting thing is that the Trace data is created in the database table, and the file is actually
Thanks a lot Raul. It did the trick. Happy weekend to you.
Awesome. This is clear now, but the sad part is I am not able to isolate the interception Processor from the route Processor. As I want to send an Entity to JPA using a Processor, I need to set it as the Exchange body, which again would mess up the other route.
From [1] I guess that setting the bea
I am assuming that unless I set the body of the exchange to what is processed by the bean called within the interceptor, the exchange should not change.
Is this not true?
All my bean does is return an object. But it's affecting the exchange.
public MarsTracerEntity traceExchange(Exchange
I have a strange issue that I am not able to solve.
Appreciate your help.
I am trying to intercept and persist certain header data to the database (please see the below route config).
I see the below log in my logger, and I am not able to understand why. I am not setting the "com.vzw.fp.mars.entiti
I am not sure if only one Processor is allowed within a Camel context.
The below intercept code, which calls a different Processor "marsTraceProcessor" to create a JPA Entity, is actually messing up the Processor within the actual route that processes the message.
I am having trouble fitting in the JPA entity. I am not able to find the correct syntax to do it.
I tried the below, but no data went into the database. The println's do print out the entity data.
@Override
public void process(Exchange exchange) throws Exception {
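The snippet is cut off above; a hedged completion (the helper method and endpoint wiring are assumptions, not the original code) that sends the entity to the JPA endpoint while leaving the route's own exchange alone:

    @Override
    public void process(Exchange exchange) throws Exception {
        MarsTracerEntity entity = buildEntity(exchange); // hypothetical helper
        ProducerTemplate producer = exchange.getContext().createProducerTemplate();
        // persist via the JPA component without touching this exchange's body
        producer.sendBody("jpa:" + MarsTracerEntity.class.getName(), entity);
    }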
Hi All,
For some reason the expression "${header.CamelFileName}" is not getting recognised.
I am trying to set my own file name on the output file.
Please can you point out if I am missing something here.
ProducerTemplate prod =
exchange.getContext().createProducerTemplate();
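A hedged guess: when sending through a ProducerTemplate, the CamelFileName header is only present if you set it yourself, so the expression has nothing to resolve. For example (the endpoint and file name are assumptions):

    Object body = exchange.getIn().getBody();
    ProducerTemplate prod = exchange.getContext().createProducerTemplate();
    // supply the file name explicitly; ${header.CamelFileName} then resolves
    prod.sendBodyAndHeader("file:/data/output", body,
            Exchange.FILE_NAME, "myOutput.xml");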
Thanks a lot. I was thinking of sending a SEDA queue instance to JNI, which would put messages onto it.
I'll have another route in Camel which would consume from this SEDA queue.
I am not sure if this is possible or a totally bad idea.
Also, I was thinking that instead of creating the SEDA queue instanc
Hi All,
I am looking to do the following:
1. Pass a seda component from the CamelContext to a plain Java bean.
2. The Java bean would pass this queue component to a JNI component.
Is there a way to do this?
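One possible approach, sketched purely as an assumption: rather than handing over the SEDA component itself, give the bean a ProducerTemplate bound to the seda endpoint and let the JNI callback feed messages through it (all names below are invented):

    Endpoint seda = camelContext.getEndpoint("seda:fromJni");
    ProducerTemplate producer = camelContext.createProducerTemplate();

    // hypothetical bridge bean handed to the JNI layer; inside its native
    // callback it would do: producer.sendBody(seda, nativePayload);
    MyJniBridge bridge = new MyJniBridge(producer, seda);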
Thanks & regards,
Ebe
Hi,
I am using the below code to read a file and send the processed contents to MQ.
What I see (using the YourKit profiler) is that for every message to MQ, a socket connection is being made.
I suppose that only one socket connection should be made.
How do I achieve this?
Appreciate your help.
this.g
I am finding that just reading a file of 2k records takes 1 millisecond.
Code:
from("file:C:\\camelProject\\data\\inbox\\mars")
.log("Starting to process big file: ${header.CamelFileName}")
.bean(MarsParser.class,"parseMarsData")
.to(
Thanks a lot. This was what I was looking for.
But the performance results were not what I was expecting.
To read a file, Camel is taking a lot of time. I got the below numbers using the YourKit profiler.
File of 10k records : 635 milliseconds
File of 20k records : 557 milliseconds
File of 40k reco
Also, I tested the time taken by the Camel route to read a file containing 10,000 lines of data records using the YourKit profiler. It's taking more than 100 milliseconds on average.
Is this the bottom line for a file reader in Camel?
Thanks & regards,
Ebe
I am using a Timer.
Sorry, as a beginner, please let me know what the best way would be to start the route so that it keeps running all day long.
This is to read and process any file that comes into the directory.
private final Timer timer = new Timer();
public void start() {
ti
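A minimal sketch of an alternative (an assumption, not the thread's answer): skip the java.util.Timer, start the CamelContext once, and keep the JVM alive; the file consumer then polls the directory indefinitely on its own:

    CamelContext context = new DefaultCamelContext();
    context.addRoutes(new RouteBuilder() {
        public void configure() {
            from("file:C:\\camelProject\\data\\inbox").to("seda:in");
        }
    });
    context.start();
    // block the main thread so the routes keep polling; stop with Ctrl-C
    Thread.currentThread().join();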
Hi,
I have the below type converters in my util:
1. Convert a |-delimited String to a List of tokens.
2. Convert a StringBuilder to a String.
CsvDataFormat csv = new CsvDataFormat();
CSVStrategy strategy = CSVStrategy.DEFAULT_STRATEGY;
strategy.setDelimite
Thanks a lot. Yes, it is the granularity.
I also saw the below in the logs, which I guess decreases the performance.
What I am doing is:
1. Reading a file.
2. Splitting it using the token "\n".
3. Unmarshalling each line using the "|" delimiter.
(.unmarshal(csv).convertBodyTo(List.class).aggregate(constan
Thanks a lot.
Below is the code I am using, and I get this error message. Not sure what is missing.
from("file:C:\\camelProject\\data\\inbox\\")
.log("Starting to process big file: ${header.CamelFileName}")
.split(body().tokenize("\n")).streaming()
Hi All,
Appreciate your help.
I have a bean that returns a String or a StringBuilder. I want to aggregate the first 2000 Strings returned by this method into a single file.
I am having trouble finding the right correlationExpression to aggregate them.
Please can you help me fix this?
Tha
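A hedged sketch of one answer (the endpoint and strategy names are assumptions): when every record belongs to the same group, a constant correlation expression is enough, and completionSize closes the group at 2000:

    from("direct:records")
        .aggregate(constant(true), new AppendingStrategy())
            .completionSize(2000)
        .to("file:/data/output");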
Looks like this happens in a random sequence. I passed in 5 files, and below are the logs.
I guess it's to do with some CPU activity like garbage collection.
Is there a way to make it consistent, except for the first read?
2011-11-07 08:45:53,553 INFO [Camel (camel-1) thread #0 -
file://C:camelProje
I also tried the below. For some reason, the time taken from printing "Starting to process big file" to calling the bean method "parseMarsData" is 15 milliseconds on average.
The input file has 2000 lines of data.
I am just passing them as a string to the method "parseMarsData", which do
Hi,
I am converting a file of 2000 records to a String using the below Camel API.
String input = exchange.getIn().getBody(String.class);
But I see that it's taking an average of 15 milliseconds. Is there any way to improve this? I am looking for times around 1 millisecond.
Appreciate your help.
Sorry all. Please ignore the above thread. It was an issue with the way I had
created the myList object in the MyListAggregation (AggregationStrategy)
class. I had it as a global variable.
The below works...
public Exchange aggregate(Exchange oldExch, Exchange newExch) {
t
The aggregation I am using is
    public Exchange aggregate(Exchange oldExch, Exchange newExch) {
        try {
            if (oldExch == null) {
                oldExch = new DefaultExchange(new DefaultCamelContext());
                oldExch.getIn().setBody(myLi
The below logs show the various sizes of the list. It looks like the below line of code has something wrong in it.
Please help solve this.
    from("file:/work_dir/camel_proj/data/input?delete=true")
        .log("Starting to process big file: ${header.CamelFileName}")
        .split(b
Thanks a lot.
I am using the "completionSize" option in the aggregation, and I've set it to 2000. But in the log above you can see that the size of the list that is passed to the bean "parseList" is a little more than 2000.
I am not sure if I have something wrong in the above route.
Appreciate your hel
Hi,
I have a weird situation.
The method is synchronized and the List is also synchronized, but I get a ConcurrentModificationException.
Appreciate your help.
Thanks & regards,
Ebe
Camel Route:
from("file:/vzwhome/c0sineb/work_dir/camel_proj/data/input?delete=true")
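A likely explanation, offered as an assumption: a synchronized (wrapped) list only guards individual calls, so iterating it while another thread adds still throws ConcurrentModificationException. Either hold the list's lock for the whole iteration, or use a snapshot-based list:

    java.util.List<String> records =
            java.util.Collections.synchronizedList(new java.util.ArrayList<String>());

    // iteration must hold the same lock the synchronized wrapper uses
    synchronized (records) {
        for (String record : records) {
            // process(record); // hypothetical per-record work
        }
    }

    // alternative: java.util.concurrent.CopyOnWriteArrayList iterates over
    // a snapshot, so concurrent adds never throw this exception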
Ah ok. I was trying to get some performance statistics and was using a single thread.
Any tips on increasing performance when using Camel? (I would use a thread pool.)
My task would be to read a file (csv or xml), split it into single records, process these records, aggregate 2000 records into 1 an
Thanks a lot all of you.
As you may see from the code above in this thread, I am splitting based on the newline token and sending each line to a bean.
I noticed some gaps in the processing times. I see 5 messages being processed in a single millisecond, and then there is a gap of 15 milliseconds before the next 5 m
Thanks a lot Christian.
I just noticed that adding an aggregator slows down the process, and the slowdown is huge.
Without the aggregation, the time taken between parsing each line of data is in nanoseconds (negligible), but as I add the aggregation, the time taken between parsing each line of d
Thanks a lot for your help.
Have another memory-related question.
The scenario I have is that I get a file with n records, I split them using \n and stream them to a bean. I guess there would be a single instance of the bean for each thread (I am using an Executor) in memory.
I the
Hi,
I have a file of 300,000 records, and I use the split mechanism of Camel to split them and send each record to a processor.
Does Camel store these records on the heap or somewhere else before it sends them to the processor? How does the Camel splitter work internally?
I want to make sure that the Split
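For reference, a hedged sketch of the streaming variant (the endpoint names are assumptions): with streaming() the splitter iterates the source lazily instead of materialising all 300,000 records up front:

    from("file:/data/input")
        .split(body().tokenize("\n")).streaming()
        .to("bean:recordProcessor"); // hypothetical processing bean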
Thanks a lot. It works now.
Hi, is this issue resolved?
Please advise.
I am using the following versions
Camel: 2.8.1
Spring: 2.5.6
I am just trying to consume messages from a queue and write them to a SEDA queue. But I get the following error.
context.addRoutes(new RouteBuilder() {
public void conf
Thanks a lot Willem.
I just saw that we could pass different expressions in the fileName parameter, but they do not work.
Example:
context.addRoutes(new RouteBuilder() {
public void configure() {
from("file:C:\\camelProject\\data\\ou
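For comparison, a hedged example of an expression in the fileName option (the expression chosen here is an assumption, using the documented date pattern support):

    from("direct:out")
        .to("file:C:\\camelProject\\data\\out?fileName=${date:now:yyyyMMdd}-report.txt");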
Hi All,
I am trying to marshal and unmarshal using the zip format.
When I marshal the files to the zip format, the file name does not change, even though I see the file compressed in the output folder. I was able to change the name using the fileName option to *.zip.
But this created a problem when I try to un
I am using Camel version 2.8.1, and it looks like having delete and onException together does not work.
I have an invalid XML file (with lots of closing tags missing). The valid XMLs are processed and sent to the queue, but the invalid XML is not processed, yet all the files get deleted.
I am new to Ca
Appreciate your help.
For some reason each of the polling consumers polls for a few seconds and then terminates.
context.addRoutes(new RouteBuilder() {
public void configure() {
from("quartz://myTimer?trigger.repeatInterval=2000&trigger.repeatCount=-1")
Thanks a lot. My main concern is how to implement endless polling of a directory for files.
I am not quite understanding the inner workings of the Camel PollingConsumer.
I tried various ways, but the process terminates as soon as the current set of files has been read and processed.
Appreciate your help.
I am trying to play around with Camel, and this is my first project using it.
The problem I am trying to solve is:
1. Poll a particular directory for files (this should be running all the time).
2. Convert them into messages and write them to a SEDA (in-memory) queue.
3. Read from the
Hi,
I am having trouble moving a file to a backup location after it's consumed and its contents sent to a queue.
I am new to the Camel world and exploring it. Below is the code I am using.
Spring config:
file:C:\\camelProject\\data\\inbox?move=C:\\camelProject\\data\\inbox\\bkp
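As a point of comparison, a hedged Java DSL sketch (the queue endpoint is invented): a relative move path is resolved against the consumed file's directory, which is usually the simplest way to archive processed files:

    from("file:C:\\camelProject\\data\\inbox?move=bkp")
        .to("jms:queue:inbound"); // hypothetical destination queue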