Re: Issue: Choice not working
Adding to the above post... I changed the assert statement to the one below, but still no luck (it should fail, but it doesn't): *getMockEndpoint("mock:dataToParse").assertIsSatisfied();* Appreciate your help. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Issue-Choice-not-working-tp5712978p5713135.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Issue: Choice not working
Thanks for pointing that out. I've now moved to Camel version 2.9.2 and started getting exceptions. It all makes sense now, and I've cleared that issue. Could you please also clarify the Mock issue I have? I have the test method below and am using "CamelSpringTestSupport". As I am setting the headers as below, I would expect the test to fail, but for some reason it is passing. Is there anything missing, or am I doing something silly here? 2 2000 TDR *@Test public void testMockC() throws Exception{ context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { mockEndpoints(); } }); getMockEndpoint("mock:direct:parsedData").expectedHeaderReceived("inputDataType", "TAS"); getMockEndpoint("mock:direct:parsedData").expectedHeaderReceived("messageId", 2); getMockEndpoint("mock:direct:parsedData").expectedHeaderReceived("batchSize", 4000); assertMockEndpointsSatisfied(); assertNotNull(context.hasEndpoint("mock:direct:parsedData")); }* -- View this message in context: http://camel.465427.n5.nabble.com/Issue-Choice-not-working-tp5712978p5713045.html
Issue: Choice not working
Hi All, I have the below route: * ${header.batchSize} == 2000 $'header.inputDataType' = 'TAP' $'header.inputDataType' = 'TDR' * But for some reason, the first part " $'header.inputDataType' = 'TAP' " is being executed. The log shows that the header data is set properly. Please help out; I have been banging my head against this for hours now, not knowing what's really wrong. 2012-05-21 11:11:43,011 INFO [Camel (camel-1) thread #0 - file:///tdr/in] route3- processing batch of size == 2000 --- [TDR] 2012-05-21 11:11:43,011 INFO [Camel (camel-1) thread #0 - file:///tdr/in] route3- persisting TAP file ID-WNY46231LBITDBO-1074-1337613100949-0-2 2012-05-21 11:11:43,995 INFO [main] parser.LteL0TDRParserTest - 2012-05-21 11:11:43,995 INFO [main] parser.LteL0TDRParserTest - Testing done: sendData(com.vzw.fp.camel.parser.LteL0TDRParserTest) 2012-05-21 11:11:43,995 INFO [main] parser.LteL0TDRParserTest - Took: 2.015 seconds (2015 millis) 2012-05-21 11:11:43,995 INFO [main] parser.LteL0TDRParserTest - Thanks, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Issue-Choice-not-working-tp5712978.html
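Since the archive stripped the route XML, here is an illustrative Spring DSL shape for the kind of nested choice described (the header names come from the log above; everything else, including the endpoint URI, is an assumption). One thing to check: the simple language expects ${...} syntax, and a predicate written as $'header.inputDataType' = 'TAP' is not valid simple. This fits the follow-up in this thread, where upgrading to Camel 2.9.2 made the invalid expressions throw exceptions instead of failing silently.

```xml
<!-- Illustrative sketch only; header names taken from the log, URIs assumed -->
<route>
  <from uri="file:///tdr/in"/>
  <choice>
    <when>
      <simple>${header.batchSize} == 2000 &amp;&amp; ${header.inputDataType} == 'TAP'</simple>
      <log message="persisting TAP file"/>
    </when>
    <when>
      <simple>${header.inputDataType} == 'TDR'</simple>
      <log message="persisting TDR file"/>
    </when>
    <otherwise>
      <log message="unexpected inputDataType: ${header.inputDataType}"/>
    </otherwise>
  </choice>
</route>
```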
Mock Testing
Hi All, I am looking to set up some mock tests using Apache Camel and I see contradictory behaviours. Please advise. I am not sure if the mock is doing what it should (verify that the headers are set properly). ${in.header.messageId} == 1 TAP ${in.header.messageId} == 2 TDR *Correct flow:* I am passing messageId = 2, which goes into the "TDR" flow and sets the header "contentType". This is happening correctly (log snippet below). From the log: 2012-05-18 14:41:02,039 INFO [Camel (camel-1) thread #0 - file:///tdr/in] route2- processing TDR file *Issue: *The below test result is PASS, which should actually be FAIL, as the contentType set is TDR. @Test public void testMockC() throws Exception{ context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { mockEndpoints("mockParsedData"); } }); getMockEndpoint("mock:mockParsedData").expectedHeaderReceived("contentType", "TAP"); getMockEndpoint("mock:mockParsedData").expectedHeaderReceived("messageId", 2); getMockEndpoint("mock:mockParsedData").expectedHeaderReceived("batchSize", 2000); assertMockEndpointsSatisfied(); assertNotNull(context.hasEndpoint("mock:mockParsedData")); } 2nd Issue: ${header.batchSize} == 2000 ${in.header.contentType} contains 'TAP' The log shows a contradictory flow: same thread, but the log shows the header "contentType" is set to 'TDR', whereas the wrong choice block is executed. 2012-05-18 14:41:02,039 INFO [Camel (camel-1) thread #0 - file:///tdr/in] route3- processing batch of size == 2000 --- [TDR] 2012-05-18 14:41:02,039 INFO [Camel (camel-1) thread #0 - file:///tdr/in] route3- persisting TAP file ID-WNY46231LBITDBO-3571-1337366460383-0-2 Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Mock-Testing-tp5712080.html
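A likely reason the test above passes when it should fail: if the mock URI the expectations are set on never matches an endpoint anything actually routes to, zero exchanges arrive, and a header expectation checked over zero messages is vacuously satisfied. Below is a plain-Java toy model of that behaviour; the class and method names are illustrative, NOT the real Camel API. In a real test, also calling expectedMessageCount(...) on the mock guards against this.

```java
import java.util.ArrayList;
import java.util.List;

// Toy, self-contained model (NOT the real Camel MockEndpoint API) of why a
// header expectation can pass vacuously: with zero received messages there is
// nothing for the per-message header check to fail on.
public class VacuousMock {
    private static class Received {
        final String name; final Object value;
        Received(String name, Object value) { this.name = name; this.value = value; }
    }

    private final List<Received> received = new ArrayList<>();
    private Integer expectedCount;          // null = no message-count expectation
    private String expName;
    private Object expValue;

    public void expectedHeaderReceived(String name, Object value) { expName = name; expValue = value; }
    public void expectedMessageCount(int n) { expectedCount = n; }
    public void onExchange(String headerName, Object headerValue) { received.add(new Received(headerName, headerValue)); }

    public boolean isSatisfied() {
        if (expectedCount != null && received.size() != expectedCount) return false;
        for (Received r : received) {       // with zero messages this loop never fails
            if (!r.name.equals(expName) || !r.value.equals(expValue)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        VacuousMock mock = new VacuousMock();
        mock.expectedHeaderReceived("contentType", "TAP");
        System.out.println(mock.isSatisfied());   // true: passes although nothing arrived
        mock.expectedMessageCount(1);
        System.out.println(mock.isSatisfied());   // false: the missing message is now caught
    }
}
```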
Re: Java heap space issue with Aggregation
Hi Claus, I apologise for not understanding your comments (mutate the existing oldExchange). Please help me understand if you have some time. I am doing it the way I did for the following reasons... 1. I guess that the old exchange is null only once for every aggregation block (i.e. since I have the completion size as 5, the old exchange will be null once every 5000 data). I guess this would not be an overhead causing the outage. 2. Since there is no default converter to StringBuilder, I had to make the oldExchange contents a StringBuilder, as it enhances the performance by a big margin. But I have now come to the conclusion that Java cannot handle this big a data (5 payload). But the concern really is that it was not able to handle 5000 payloads. The size of a payload would be equal to or less than a swaps product entity. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Java-heap-space-issue-with-Aggregation-tp5670608p5676295.html
Re: Java heap space issue with Aggregation
I am very sorry for not understanding this properly. Please accept my apologies if I am wasting your time. I am totally missing something here. Below is the change I had made (hoping that this is what you meant by "mutate the old exchange"). I am using StringBuilder so that the process is faster, and it also helps avoid creating many String objects. 1. When I use String.class, all works fine until I run out of heap space. 2. If I use StringBuilder, I get the following NullPointerException. 3. Looks like there is no type converter to convert from String to StringBuilder. 2012-04-30 10:34:38,087 INFO [Camel (camel-1) thread #2 - seda://streamQueue] ipdr.GlobalAggrStratergy - is oldExchange null: Exchange[null] AND is newExchange null: Exchange[null] 2012-04-30 10:34:38,087 INFO [Camel (camel-1) thread #2 - seda://streamQueue] ipdr.GlobalAggrStratergy - Aggregate old orders: null 2012-04-30 10:34:38,087 INFO [Camel (camel-1) thread #2 - seda://streamQueue] ipdr.GlobalAggrStratergy - Aggregate new order: null 2012-04-30 10:34:38,087 ERROR [Camel (camel-1) thread #2 - seda://streamQueue] ipdr.GlobalAggrStratergy - Error aggregating java.lang.NullPointerException at com.vzw.fp.ipdr.GlobalAggrStratergy.aggregate(GlobalAggrStratergy.java:26) @Override public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { try{ log.info(" is oldExchange null: "+oldExchange+" AND is newExchange null: "+newExchange); if (oldExchange == null) { return newExchange; } else { log.info("Aggregate old orders: " + oldExchange.getIn().getBody(StringBuilder.class)); log.info("Aggregate new order: " + newExchange.getIn().getBody(StringBuilder.class)); oldExchange.getIn().setBody( oldExchange.getIn().getBody(StringBuilder.class).append(newExchange.getIn().getBody(StringBuilder.class)+"\n")); counter++; } }catch(Exception ex){ log.error("Error aggregating", ex); } oldExchange.setProperty(Exchange.CONTENT_LENGTH, counter); if(counter >= 5) counter = 0; oldExchange.getIn().setHeader(Exchange.FILE_NAME_ONLY, newExchange.getProperty(Exchange.FILE_NAME_ONLY)); return oldExchange; } -- View this message in context: http://camel.465427.n5.nabble.com/Java-heap-space-issue-with-Aggregation-tp5670608p5676110.html
Re: Java heap space issue with Aggregation
I have set it as -Xms1024m -Xmx1024m -- View this message in context: http://camel.465427.n5.nabble.com/Java-heap-space-issue-with-Aggregation-tp5670608p5671375.html
Aggregating huge files
Hi, I have a scenario where I need to aggregate small files containing XML data (records) into large ones containing 5 XML data (records). But I ended up with a Java heap memory issue. Is there a way to stream the records into a file till I reach 5 records, then move on to create another file and start writing to it? Appreciate your help. Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Aggregating-huge-files-tp5671368p5671368.html
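Outside Camel, the rolling idea the post asks for can be sketched with plain java.nio: write each record as it streams in, and start a new file every N records, so no batch is ever held in memory. The part-N.xml naming below is made up for the example.

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Rolling-file sketch: stream records out as they arrive and roll to a new
// file every batchSize records, keeping memory usage flat.
public class RollingWriter {
    public static List<Path> writeInBatches(Path dir, Iterable<String> records, int batchSize) throws IOException {
        List<Path> files = new ArrayList<>();
        Writer out = null;
        int inBatch = 0;
        for (String record : records) {
            if (out == null) {                       // open the next file lazily
                Path p = dir.resolve("part-" + files.size() + ".xml");
                files.add(p);
                out = Files.newBufferedWriter(p);
            }
            out.write(record);
            out.write('\n');
            if (++inBatch == batchSize) {            // batch complete: roll over
                out.close();
                out = null;
                inBatch = 0;
            }
        }
        if (out != null) out.close();                // flush a trailing partial batch
        return files;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("agg");
        List<Path> files = writeInBatches(dir, Arrays.asList("<r>1</r>", "<r>2</r>", "<r>3</r>"), 2);
        System.out.println(files.size());            // one full batch plus one partial
    }
}
```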
Re: Java heap space issue with Aggregation
After running the Eclipse Memory Analyzer, I see the below in the analysis data ... One instance of "char[]" loaded by "" occupies 63,586,312 (87.23%) bytes Please can someone help me in fixing this. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Java-heap-space-issue-with-Aggregation-tp5670608p5670938.html
Re: Java heap space issue with Aggregation
This is the error trace I see in my logs ... org.apache.camel.CamelExchangeException: Error occurred during aggregation. Exchange[null]. Caused by: [java.lang.OutOfMemoryError - Java heap space] at org.apache.camel.processor.aggregate.AggregateProcessor.doAggregation(AggregateProcessor.java:243) at org.apache.camel.processor.aggregate.AggregateProcessor.process(AggregateProcessor.java:197) at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61) at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73) at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99) at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90) at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:71) at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73) at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99) at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90) at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:91) at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73) at org.apache.camel.processor.RedeliveryErrorHandler.processErrorHandler(RedeliveryErrorHandler.java: -- View this message in context: http://camel.465427.n5.nabble.com/Java-heap-space-issue-with-Aggregation-tp5670608p5670850.html
Java heap space issue with Aggregation
Hi All, I am trying to aggregate a large number of XML files into files of 5 records. I am getting a java.lang.OutOfMemoryError - Java heap space error. I am trying to see if there are any leaks, but to my eyes I do not see any. Appreciate your thoughts on this. Aggregation logic: public class GlobalAggrStratergy implements AggregationStrategy { private static Logger log = Logger.getLogger(GlobalAggrStratergy.class); int counter = 0; @Override public Exchange aggregate(Exchange exchange1, Exchange exchange2) { try{ StringBuilder builder; if (exchange1 == null || null == exchange1.getIn().getBody()) { builder = new StringBuilder(); exchange1 = new DefaultExchange(new DefaultCamelContext()); exchange1.getIn().setBody(builder); } builder = exchange1.getIn().getBody(StringBuilder.class); builder.append(exchange2.getIn().getBody()+"\n"); exchange1.getIn().setBody(builder); exchange1.getIn().setHeader(Exchange.FILE_NAME_ONLY, exchange2.getProperty(Exchange.FILE_NAME_ONLY)); counter++; }catch(Exception ex){ log.error("Error aggregating", ex); } exchange1.setProperty(Exchange.BATCH_SIZE, counter); if(counter >= 5) counter = 0; return exchange1; } Route configuration: public void configure() throws Exception { from("direct:producerQueue").log("File name: ${in.header.fileName}") .setProperty(Exchange.FILE_NAME_ONLY, simple("${file:onlyname.noext}")) .split().tokenizeXML("IPDR").streaming() .aggregate(header("messageId"), new GlobalAggrStratergy()).completionSize(5).completionTimeout(2) .process(new IPDRHeaderFooterProcessor()) .to(IPDRUtil.getInstance().getProperty("IPDROutputDir")); } Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Java-heap-space-issue-with-Aggregation-tp5670608p5670608.html
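For reference, the "mutate the existing oldExchange" advice from later in this thread can be modeled in plain Java. The Exchange class below is a stand-in for the example, NOT org.apache.camel.Exchange; the point is the shape of the strategy: on the first record just keep newExchange as the accumulator, and afterwards append to the old body in place, rather than constructing a fresh exchange and context per batch as the strategy above does.

```java
import java.util.List;

// Plain-Java model of a mutate-in-place aggregation strategy.
public class MutatingAggregation {
    public static class Exchange {
        public StringBuilder body;
        public Exchange(StringBuilder body) { this.body = body; }
    }

    // Mirrors AggregationStrategy.aggregate(oldExchange, newExchange):
    // first record of a batch -> keep newExchange as the accumulator;
    // later records -> append to the old body in place.
    public static Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            return newExchange;
        }
        oldExchange.body.append('\n').append(newExchange.body);
        return oldExchange;
    }

    public static String aggregateAll(List<String> records) {
        Exchange acc = null;
        for (String r : records) {
            acc = aggregate(acc, new Exchange(new StringBuilder(r)));
        }
        return acc == null ? "" : acc.body.toString();
    }

    public static void main(String[] args) {
        System.out.println(aggregateAll(List.of("<r>1</r>", "<r>2</r>")));
    }
}
```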
Re: File Name through Aggregator
Hi Claus, I have another issue with the aggregator. I am trying the aggregation with completionSize set to 2000. What happens is that the first output has 2000 records, but the second output has 4000 records instead of the next 2000 records. Is there something I am missing in my aggregator logic? Appreciate your help. I am using the same aggregation logic given in this thread. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/File-Name-through-Aggregator-tp5665526p5668183.html
Re: File Name through Aggregator
You are awesome. Thanks a lot for the clue; it did the trick. Below is the updated aggregator code: public Exchange aggregate(Exchange exchange1, Exchange exchange2) { try{ StringBuilder builder = new StringBuilder(); if (exchange1 == null || null == exchange1.getIn().getBody()) { exchange1 = new DefaultExchange(new DefaultCamelContext()); exchange1.getIn().setBody(builder); } builder = exchange1.getIn().getBody(StringBuilder.class); builder.append(exchange2.getIn().getBody()+"\n"); exchange1.getIn().setBody(builder); exchange1.getIn().setHeader(Exchange.FILE_NAME_ONLY, exchange2.getProperty(Exchange.FILE_NAME_ONLY)); counter++; }catch(Exception ex){ log.error("Error aggregating", ex); } exchange1.setProperty(Exchange.BATCH_SIZE, counter); return exchange1; } -- View this message in context: http://camel.465427.n5.nabble.com/File-Name-through-Aggregator-tp5665526p5665581.html
File Name through Aggregator
Hi All, Is there a way to capture the file name after it passes through the Aggregator? I am aggregating the contents of various files. The output should go into another file, which has to have the same name as one of the processed files. Looks like after passing through the aggregator, the file name is no longer available in the Exchange. I tried setting it in the header, but no luck. Appreciate your help. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/File-Name-through-Aggregator-tp5665526p5665526.html
Re: Merge XML files
Thanks a lot Marco. It was what I was looking for. I had to pass the output from the Aggregator to a bean to wrap the aggregator output inside the header and footer nodes. Best regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Merge-XML-files-tp5664818p5665474.html
Merge XML files
Hi All, I am trying to merge two XML files of the same format (i.e. they contain the same nodes but different data). I use the tokenizeXML and tokenizePair utilities. But I am not sure how to include the header in the output XML file. E.g. I have 2 XML files like the below ... http://www.ipdr.org/namespaces/ipdr"; xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; xsi:schemaLocation="http://www.ipdr.org/namespaces/ipdr EricssonIs835v1.11.xsd" seqNum="270" version="2.0"> I need the output as ... http://www.ipdr.org/namespaces/ipdr"; xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; xsi:schemaLocation="http://www.ipdr.org/namespaces/ipdr EricssonIs835v1.11.xsd" seqNum="270" version="2.0"> With tokenizeXML and tokenizePair as mentioned in Claus Ibsen's blog (http://davsclaus.blogspot.com/2011/11/splitting-big-xml-files-with-apache.html), I was not able to get the below part of the XML into the output. Appreciate your help. http://www.ipdr.org/namespaces/ipdr"; xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; xsi:schemaLocation="http://www.ipdr.org/namespaces/ipdr EricssonIs835v1.11.xsd" seqNum="270" version="2.0"> Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Merge-XML-files-tp5664818p5664818.html
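As a sketch of the wrap-up step the follow-up post describes (aggregate the inner records, then re-add the document header and footer from one of the source files), here is a plain string-based approach in the spirit of tokenizeXML. The record tag name IPDR follows the thread; this is illustrative string handling, not namespace-aware XML parsing.

```java
public class XmlMerge {
    // Keeps everything before the first record and after the last record of the
    // template document, and splices the merged records in between.
    // String-based on purpose (like tokenizeXML); not a real XML parser, so it
    // assumes the record tag name is not a prefix of the root element's name.
    public static String mergeRecords(String template, String mergedRecords, String recordTag) {
        String open = "<" + recordTag;
        String close = "</" + recordTag + ">";
        int start = template.indexOf(open);
        int end = template.lastIndexOf(close) + close.length();
        return template.substring(0, start) + mergedRecords + template.substring(end);
    }

    public static void main(String[] args) {
        String doc = "<ipdrDoc seqNum=\"270\" version=\"2.0\"><IPDR>a</IPDR></ipdrDoc>";
        System.out.println(mergeRecords(doc, "<IPDR>a</IPDR><IPDR>b</IPDR>", "IPDR"));
    }
}
```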
Re: Compressing file after Move
Thanks, I tried using gzip (as we moved to Solaris) and it does work as expected. But my issue is to compress before moving the files to backup. The above endpoint moves the file as-is to the directory "/work_dir/data/input/bkp/tap". But we want the file to be compressed before being backed up. Is there a way of doing this? -- View this message in context: http://camel.465427.n5.nabble.com/Compressing-file-after-Move-tp5451023p5451388.html
Issue with logging
Hi All, For some reason I am not able to see any of the logs defined in the below route. Appreciate your help in solving this. I noticed that if I remove the choice from the second route, all the logging appears in my log file. I need to find out which choice branch is executed. ${file:path} ${file:name} ltel2 ${header.fileType} == "ltel2" ${header.fileType} == "ltel3" Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Issue-with-logging-tp5158558p5158558.html
Re: using values from a properties-file in setHeader-method
I see something strange here. If I have the below two routes, the logging works perfectly. ${file:path} ${file:name} ltel2 But *NOT* if I have the second route as below. I do not know what difference it makes in the flow. I am not seeing any of the logs. ${header.fileType} == "ltel2" Also, please advise if the above choice evaluation is correct. -- View this message in context: http://camel.465427.n5.nabble.com/using-values-from-a-properties-file-in-setHeader-method-tp5154287p5157928.html
Re: using values from a properties-file in setHeader-method
Hi All, I am trying to set a few headers, but it does not seem to work. I guess I am missing something but am not able to find out what. The below logs are not printed in the log file. ${file:path} ${file:name} ltel2 Appreciate your help. Ebe -- View this message in context: http://camel.465427.n5.nabble.com/using-values-from-a-properties-file-in-setHeader-method-tp5154287p5157868.html
Re: Using Zip dataformat
Apologies and thanks. But enabling stream caching also did not solve the problem. Sorry to bother you, but is there something wrong with the below code? Note: the input is a large String. ${in.header.batchSize} == 2000 ${in.header.batchSize} != 2000 true -- View this message in context: http://camel.465427.n5.nabble.com/Using-Zip-dataformat-tp5103911p5105513.html
Re: Using Zip dataformat
Yes, but according to the Camel documentation, it says "Multicast will implicitly cache streams to ensure that all the endpoints can access the message content " http://camel.apache.org/stream-caching.html -- View this message in context: http://camel.465427.n5.nabble.com/Using-Zip-dataformat-tp5103911p5105444.html
Re: Using Zip dataformat
I tried using the Java DSL to compress files using zip, and it did actually compress these files. But I was not able to extract them. Looks like they are compressed, but not the same way as a normal zip file. What I am trying to do is something different: the input is actually a large String and not an actual file. In this case the compression did not happen. Note: looks like the multicast is hampering the compression. If I remove the multicast, the String is actually compressed when written to a file. -- View this message in context: http://camel.465427.n5.nabble.com/Using-Zip-dataformat-tp5103911p5105405.html
Using Zip dataformat
Hi, I am trying to use the Zip / GZip data format to compress the output files using the Spring DSL implementation. But it looks like the files are not compressed. Not sure what mistake I am making. Appreciate your help. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Using-Zip-dataformat-tp5103911p5103911.html
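For what it's worth, this is what gzip compression of a String body looks like at the byte level, using only java.util.zip (independent of Camel). Checking the first two bytes of the produced file for the gzip magic number (0x1f 0x8b) is a quick way to tell whether the data format actually ran; this also relates to the observation later in this thread that "zip"-compressed output is not a normal zip archive, since it is raw deflate rather than the zip file format.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Stdlib sketch of gzipping a String payload, plus a quick sanity check that
// a byte stream really is gzip (magic bytes 0x1f 0x8b).
public class GzipCheck {
    public static byte[] gzip(String text) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(text.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    public static boolean looksLikeGzip(byte[] data) {
        return data.length >= 2 && (data[0] & 0xFF) == 0x1F && (data[1] & 0xFF) == 0x8B;
    }

    public static String gunzip(byte[] data) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] compressed = gzip("a large string payload");
        System.out.println(looksLikeGzip(compressed));   // true if compression ran
        System.out.println(gunzip(compressed));          // round-trips the payload
    }
}
```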
TypeConverter for StringBuilder
Hi All, I guess there is no TypeConverter available in Camel to convert StringBuilder to InputStream. As most aggregations would involve StringBuilder, it would be nice to have this TypeConverter available as well. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/TypeConverter-for-StringBuilder-tp5092438p5092438.html
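Pending such a converter, the conversion itself is a one-liner over the JDK: it can be done inline in a processor, or wrapped in a @Converter-annotated method and registered as a custom Camel type converter. A stdlib-only sketch of the conversion:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// The conversion a StringBuilder -> InputStream TypeConverter would perform;
// usable directly as a workaround while no built-in converter exists.
public class StringBuilderConverter {
    public static InputStream toInputStream(StringBuilder builder) {
        return new ByteArrayInputStream(builder.toString().getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        InputStream in = toInputStream(new StringBuilder("record-1\n"));
        System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
    }
}
```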
Extracting Header values
Hi All, I am looking to compare the CamelBatchSize available in the Exchange header to an integer using Spring DSL. I tried various options but am not getting it right. 1. ${header.batchSize}==2000 2. ${in.header.batchSize}==2000 I tried logging the above values, but they were empty. processing PARSED messages [ ] Please let me know if there is a place where I could find all the header values that I could use in Spring DSL. Will using Simple give me the right comparison? Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Extracting-Header-values-tp5092102p5092102.html
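One thing worth checking, based on the header names used elsewhere in these threads (an assumption, not a confirmed diagnosis): a header set via the Exchange.BATCH_SIZE constant is stored under the name CamelBatchSize, and simple header names are case-sensitive, so ${header.batchSize} would come back empty. A sketch of the comparison using the full header name:

```xml
<!-- Sketch: read the header back under the exact name it was stored with -->
<choice>
  <when>
    <simple>${header.CamelBatchSize} == 2000</simple>
    <log message="processing PARSED messages [${header.CamelBatchSize}]"/>
  </when>
</choice>
```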
Re: Intercept issue - need help
Thanks a lot William. I resolved this by just passing the Headers to the bean method. This made sure the original Exchange is not changed. public TracerEntity traceExchange(@Headers Map headers) throws Exception { -- View this message in context: http://camel.465427.n5.nabble.com/Intercept-issue-need-help-tp5080816p5092088.html
Re: Passing Header / Properties to Bean
Thanks a lot for the quick reply. Is there a similar annotation to pass Exchange properties? -- View this message in context: http://camel.465427.n5.nabble.com/Passing-Header-Properties-to-Bean-tp5089208p5089239.html
Passing Header / Properties to Bean
Hi All, I am using wireTap to log the header / properties details. I also want to make sure I do not copy the whole body of the message, as that would cause memory-related issues when dealing with a large number of huge input files. I am trying to understand the behind-the-scenes work of Camel. I tried the following (which did give me the headers). Will the below implementation still copy the whole exchange, or just the headers? Also, please let me know how I could pass the Exchange properties in a similar way. public void testIntercept(@Headers Map headers){ log.info("Reached Intercept: "+headers); } Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Passing-Header-Properties-to-Bean-tp5089208p5089208.html
Re: Intercept issue - need help
Hi Raul / William, Thanks for your help. Actually, sending the intercept to a seda queue also did not do the trick; it still messes up the original exchange. I tried creating a new ProducerTemplate, but that also did not work. If I just do a stream:out (Spring DSL) or System.out.println (Java DSL), the original Exchange does not change. I tried doing the following. 1. Send the intercept to a seda queue 2. Call the bean out from the above queue 3. Create a POJO bean from the information got from the Exchange headers 4. Create a ProducerTemplate out of the Exchange context 5. Send the bean on this producer to another seda queue 6. Use JPA to persist the bean. But doing all this still changes the original Exchange (code snippets below). public void traceExchange(Exchange exchange) throws Exception { CamelContext context = new DefaultCamelContext(); ProducerTemplate prod = context.createProducerTemplate(); prod.sendBody("seda:trace", entity); } The only way I could do this without changing the original Exchange was by using wireTap. But the pitfall with this approach is that we duplicate the Exchange, which would lead to a memory issue. Note: it was a silly mistake which kept the file from being moved. I've fixed it. -- View this message in context: http://camel.465427.n5.nabble.com/Intercept-issue-need-help-tp5080816p5088854.html
Re: Intercept issue - need help
Looks like the JPA connection does not close after persisting the data in the database. Not sure how to resolve this. Appreciate any help. -- View this message in context: http://camel.465427.n5.nabble.com/Intercept-issue-need-help-tp5080816p5086885.html
Re: Intercept issue - need help
Hi Raul, Sorry for asking again. For some reason the processed file is not being moved after being processed (i.e. the route just hangs after printing the log "Persisting to Trace to Database"). The interesting thing is that the Trace data is created in the database table and the file is actually processed completely. Is there any statement that I need to include after calling the JPA? Greatly appreciate your help. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Intercept-issue-need-help-tp5080816p5086304.html
Re: Intercept issue - need help
Thanks a lot Raul. It did the trick. Happy weekend to you. -- View this message in context: http://camel.465427.n5.nabble.com/Intercept-issue-need-help-tp5080816p5081424.html
Re: Intercept issue - need help
Awesome. This is clear now, but the sad part is that I am not able to isolate the interception Processor from the route Processor. As I want to send an entity to the JPA endpoint using a Processor, I need to set it as the Exchange body, which again would mess up the other route. From [1] I guess that setting the bean's return type to void would not mess up the Exchange. -- View this message in context: http://camel.465427.n5.nabble.com/Intercept-issue-need-help-tp5080816p5081145.html
Re: Intercept issue - need help
I am assuming that unless I set the body of the exchange to what is processed by the bean called within the interceptor, the exchange should not change. Is this not true? All my bean does is return an object. But it's affecting the exchange. public MarsTracerEntity traceExchange(Exchange exchange) throws Exception { MarsTracerEntity entity = new MarsTracerEntity(); if(null != exchange.getIn().getHeader("CamelFileNameOnly")) entity.setInputFileName((String)exchange.getIn().getHeader("CamelFileNameOnly")); if(null != exchange.getIn().getHeader("CamelBatchSize")) entity.setBatchSize((Integer)exchange.getIn().getHeader("CamelBatchSize")); if(null != exchange.getIn().getHeader("breadcrumbId")) entity.setOutputFileName((String)exchange.getIn().getHeader("breadcrumbId")); log.info("InputFilename : "+entity.getInputFileName()); log.info("outputFilename : "+entity.getOutputFileName()); log.info("Batch : "+entity.getBatchSize()); return entity; } -- View this message in context: http://camel.465427.n5.nabble.com/Intercept-issue-need-help-tp5080816p5081107.html
Intercept issue - need help
I have a strange issue that I am not able to solve. Appreciate your help. I am trying to intercept and persist certain header data to the database (please see the below route config). I see the below log in my logger, which I am not able to understand. I am not setting "com.vzw.fp.mars.entities.MarsTracerEntity" as the body of any exchange object, but the log statement '' still displays the below log saying that the parsed data is the JPA entity I am using to persist route headers to the database. *LOG:* 2011-12-16 11:38:01,341 INFO [Camel (camel-1) thread #1 - file://C:\camelProject\data\inbox\mars] route2- PARSED data is com.vzw.fp.mars.entities.MarsTracerEntity@d91987 ${in.header.batchSize}==2000 -- View this message in context: http://camel.465427.n5.nabble.com/Intercept-issue-need-help-tp5080816p5080816.html
Re: convert Exchange Headers to JPA Entity
I am not sure if only one Processor is allowed within a Camel context. Having the below intercept code, which calls a different Processor "marsTraceProcessor" to create a JPA entity, is actually messing up the Processor within the actual route that processes the message. The output of the below Processor seems to get mixed up with the JPA entity created in the intercept Processor. *Intercept Processor code:* public class MarsTraceProcessor implements Processor { private static Logger log = Logger.getLogger(MarsTraceProcessor.class); @Override public void process(Exchange exchange) throws Exception { MarsTracerEntity entity = new MarsTracerEntity(); if(null != exchange.getIn().getHeader("CamelFileNameOnly")) entity.setInputFileName((String)exchange.getIn().getHeader("CamelFileNameOnly")); if(null != exchange.getIn().getHeader("CamelBatchSize")) entity.setBatchSize((Integer)exchange.getIn().getHeader("CamelBatchSize")); if(null != exchange.getIn().getHeader("breadcrumbId")) entity.setOutputFileName((String)exchange.getIn().getHeader("breadcrumbId")); log.info("InputFilename : "+entity.getInputFileName()); log.info("outputFilename : "+entity.getOutputFileName()); log.info("Batch : "+entity.getBatchSize()); exchange.getIn().setBody(entity); } } *Processor within Route:* public class MarsProcessor implements Processor { private static Logger log = Logger.getLogger(MarsProcessor.class); @Override public void process(Exchange exchange) throws Exception { MarsParser jniParser = new MarsParser(); List marsData = jniParser.parseMarsRecords(exchange); ProducerTemplate prod = exchange.getContext().createProducerTemplate(); for(ParsedDataBean data: marsData){ log.info("File name : "+data.getFileName()); exchange.getIn().setBody(data.getData()); exchange.getIn().setHeader(Exchange.FILE_NAME,data.getFileName()); exchange.getIn().setHeader(Exchange.BATCH_SIZE, data.getSize()); //log.info("File name : "+exchange.getIn().getBody()); prod.send("seda:marsDataProcessingQueue", exchange); } } } -- View this message in context: http://camel.465427.n5.nabble.com/convert-Exchange-Headers-to-JPA-Entity-tp5077846p5080368.html
Re: convert Exchange Headers to JPA Entity
I am having trouble fitting in the JPA entity. I am not able to find the correct syntax to do it. I tried the below, but no data went into the database. The println's do print out the entity data. @Override public void process(Exchange exchange) throws Exception { MarsTracerEntity entity = new MarsTracerEntity(); if(null != exchange.getIn().getHeader("CamelFileNameOnly")) entity.setInputFileName((String)exchange.getIn().getHeader("CamelFileNameOnly")); if(null != exchange.getIn().getHeader("CamelBatchSize")) entity.setBatchSize((Integer)exchange.getIn().getHeader("CamelBatchSize")); if(null != exchange.getIn().getHeader("breadcrumbId")) entity.setOutputFileName((String)exchange.getIn().getHeader("breadcrumbId")); System.out.println("InputFilename : "+entity.getInputFileName()); System.out.println("outputFilename : "+entity.getOutputFileName()); System.out.println("Batch : "+entity.getBatchSize()); exchange.getIn().setBody(entity); JpaEndpoint endpoint = (JpaEndpoint) exchange.getContext() .getEndpoint("jpa://org.apache.camel.processor.interceptor.JpaTraceEventMessage?persistenceUnit=tracer"); JpaTemplate jpaTemplate = endpoint.getTemplate(); jpaTemplate.persist(entity); } Entity def: @Entity @Table(name="MarsTrace") public class MarsTracerEntity implements Serializable{ The intercept config in the Camel context. -- View this message in context: http://camel.465427.n5.nabble.com/convert-Exchange-Headers-to-JPA-Entity-tp5077846p5078708.html
using header.CamelFileName in file
Hi All, For some reason the expression "${header.CamelFileName}" is not getting recognised. I am trying to set my own file name on the output file. Please can you point out if I am missing something here? ProducerTemplate prod = exchange.getContext().createProducerTemplate(); for(ParsedDataBean data: tdrData){ //log.info("The size of processed data: "+data.getFileName()); exchange.getIn().setBody(data.getData()); String fileName = data.getFileName(); exchange.getIn().setHeader(Exchange.FILE_NAME,fileName); if(data.getSize() < batchSize){ log.info("The Batch of "+batchSize+" records is NOT complete"); exchange.getIn().setHeader(Exchange.BATCH_COMPLETE, false); prod.sendBody("seda:tempQueue",exchange.getIn().getBody()); } else { log.info("The Batch of "+batchSize+" records is complete"); exchange.getIn().setHeader(Exchange.BATCH_COMPLETE, true); prod.sendBody("file:C:\\camelProject\\data\\outbox\\tdr?fileName=${header.CamelFileName}.xml", exchange.getIn().getBody()); } } Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/using-header-CamelFileName-in-file-tp5052452p5052452.html Sent from the Camel - Users mailing list archive at Nabble.com.
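A likely cause: prod.sendBody(uri, body) sends only the body, so the CamelFileName header set on the original exchange never travels with the message, and ${header.CamelFileName} resolves to nothing at the file endpoint. Camel's ProducerTemplate offers sendBodyAndHeader(uri, body, Exchange.FILE_NAME, fileName) for exactly this. A stdlib sketch of why the placeholder comes up empty (the resolver below is a toy, not Camel's Simple language):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HeaderPlaceholderDemo {
    private static final Pattern P = Pattern.compile("\\$\\{header\\.([A-Za-z]+)\\}");

    // Resolve ${header.X} placeholders against the headers that actually
    // travelled with the message; a missing header resolves to an empty string.
    static String resolve(String uri, Map<String, String> headers) {
        Matcher m = P.matcher(uri);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(out, Matcher.quoteReplacement(headers.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String uri = "${header.CamelFileName}.xml";
        // sendBody(): no headers travel with the message
        System.out.println(resolve(uri, Map.of()));
        // sendBodyAndHeader(): the header is present and the name resolves
        System.out.println(resolve(uri, Map.of("CamelFileName", "MARS_TAP_0001")));
    }
}
```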
Re: Passing SEDA instance to Bean
Thanks a lot. I was thinking of sending a SEDA queue instance to JNI, which would put messages onto it. I'll have another route in Camel which would consume out of this SEDA queue. Not sure if this is possible or a totally bad idea. Also I was thinking that instead of creating the SEDA queue instance from Camel, what if C++ creates a SEDA queue in the same JVM and publishes to it? A Camel route should be able to consume messages out of this SEDA queue. Is this correct? Looking to try it out today. -- View this message in context: http://camel.465427.n5.nabble.com/Passing-SEDA-instance-to-Bean-tp5032985p5035461.html Sent from the Camel - Users mailing list archive at Nabble.com.
Passing SEDA instance to Bean
Hi All, I am looking to do the following. 1. Pass a SEDA component from the CamelContext to a plain Java bean. 2. The Java bean would pass this queue component to a JNI component. Is there a way to do this? Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Passing-SEDA-instance-to-Bean-tp5032985p5032985.html Sent from the Camel - Users mailing list archive at Nabble.com.
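Mechanically this is feasible: a Camel seda: endpoint is backed by a java.util.concurrent.BlockingQueue, and SedaEndpoint exposes it (getQueue()), though the queue holds Exchange objects, so the native side would have to produce Exchanges; in practice it is often simpler to hand the bean a plain BlockingQueue and let the bean feed Camel. The hand-off pattern itself is plain java.util.concurrent; a sketch with the producer thread standing in for the JNI side (all names here are illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueHandoffDemo {
    // The bean simply holds a reference to the queue it was given.
    static class JniBridgeBean {
        private final BlockingQueue<String> queue;
        JniBridgeBean(BlockingQueue<String> queue) { this.queue = queue; }
        // Stand-in for the native side pushing a parsed record.
        void onNativeRecord(String record) throws InterruptedException {
            queue.put(record);
        }
    }

    static String produceAndConsume(String record) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        JniBridgeBean bean = new JniBridgeBean(queue);
        Thread producer = new Thread(() -> {
            try { bean.onNativeRecord(record); } catch (InterruptedException ignored) { }
        });
        producer.start();
        String consumed = queue.take(); // the "route" consuming from the queue
        producer.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(produceAndConsume("TDR|2000|TAP"));
    }
}
```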
Sending to MQ - Socket Connection
Hi, I am using the below code to read a file and send the processed contents to MQ. What I see (using the YourKit profiler) is that a socket connection is being made for every message to MQ. I would expect only 1 socket connection to be made. How do I achieve this? Appreciate your help. this.getContext().addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(comp.getConfiguration().getConnectionFactory())); from("file:/work_dir/camel_proj/data/input/mars") .log("Starting to process big file: ${header.CamelFileName}") .bean(Parser_AR.class,"parseData") .to("jms:queue:test_request_1").marshal().zip() .to("file:/work_dir/camel_proj/data/output/mars?fileName=${file:onlyname.noext}").end(); I am starting the route as a standalone using org.apache.camel.Main Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Sending-to-MQ-Socket-Connection-tp4995540p4995540.html Sent from the Camel - Users mailing list archive at Nabble.com.
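With a plain, non-pooling ConnectionFactory, the JMS producer may open a fresh connection per send; the usual fix is to wrap the factory in a caching or pooling one (e.g. Spring's CachingConnectionFactory, or ActiveMQ's PooledConnectionFactory) before handing it to the JmsComponent. The effect of caching, sketched with a stdlib stand-in for a connection factory (illustrative names, not the real JMS API):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CachingFactoryDemo {
    interface ConnectionFactory { Object createConnection(); }

    // Naive factory: every call opens a new "connection" (one socket each).
    static class PlainFactory implements ConnectionFactory {
        final AtomicInteger opened = new AtomicInteger();
        public Object createConnection() { opened.incrementAndGet(); return new Object(); }
    }

    // Caching wrapper: the first connection is reused for every later send.
    static class CachingFactory implements ConnectionFactory {
        private final ConnectionFactory target;
        private Object cached;
        CachingFactory(ConnectionFactory target) { this.target = target; }
        public synchronized Object createConnection() {
            if (cached == null) cached = target.createConnection();
            return cached;
        }
    }

    // Simulates a producer that asks the factory for a connection per message.
    static int sendAndCountOpened(boolean caching, int messages) {
        PlainFactory plain = new PlainFactory();
        ConnectionFactory f = caching ? new CachingFactory(plain) : plain;
        for (int i = 0; i < messages; i++) f.createConnection();
        return plain.opened.get();
    }

    public static void main(String[] args) {
        System.out.println(sendAndCountOpened(false, 2000)); // 2000 sockets
        System.out.println(sendAndCountOpened(true, 2000));  // 1 socket
    }
}
```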
Re: Type Converters Load
I am finding that just reading a file of 2k records takes 1 millisecond. Code: from("file:C:\\camelProject\\data\\inbox\\mars") .log("Starting to process big file: ${header.CamelFileName}") .bean(MarsParser.class,"parseMarsData") .to("file:C:\\camelProject\\data\\outbox\\mars?fileName=${file:onlyname.noext}"); But reading the file, splitting it into lines on the "\n" token, and streaming the pieces to an aggregator or a bean takes more than 550 milliseconds. from("file:C:\\camelProject\\data\\inbox\\mars") .log("Starting to process big file: ${header.CamelFileName}") .split(body().tokenize("\n")).streaming() .unmarshal(csv).convertBodyTo(List.class) .bean(MarsParser1.class,"create_fp_record1") .aggregate(constant(true), new MyAggregationStrategy()).completionSize(2000) .convertBodyTo(String.class) .to("file:C:\\camelProject\\data\\outbox\\mars?fileName=${file:onlyname.noext}") -- View this message in context: http://camel.465427.n5.nabble.com/Type-Converters-Load-tp4982080p4991853.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Type Converters Load
Thanks a lot. This was what I was looking for. But the performance results were not what I was hoping for. To read a file, Camel is taking a lot of time. I got the below numbers using the YourKit profiler. File of 10k records : 635 milliseconds File of 20k records : 557 milliseconds File of 40k records : 718 milliseconds File of 80k records : 1408 milliseconds File of 2k records : 34 milliseconds The code I am using is from("file:/work_dir/camel_proj/data/input/mars") .log("Starting to process big file: ${header.CamelFileName}") .split(body().tokenize("\n")).streaming() Is there a way to improve this? We are looking to use Camel to read a huge number of files with varying amounts of data each day. Going by the above times, it's really far behind what we are looking for. We are looking at reading a file of 2k records in 1 millisecond. Please advise. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Type-Converters-Load-tp4982080p4991238.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Type Converters Load
Also, I tested the time taken by a Camel route to read a file containing 10,000 lines of data records using the YourKit profiler. It's taking more than 100 milliseconds on average. Is this the lower bound for a file reader in Camel? Thanks & regards, Ebe http://camel.465427.n5.nabble.com/file/n4985239/YourKit_Times1.bmp -- View this message in context: http://camel.465427.n5.nabble.com/Type-Converters-Load-tp4982080p4985239.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Type Converters Load
I am using a Timer. Sorry, I am a beginner; please let me know the best way to start the route so that it keeps running all day long. This is to read and process any file that comes into the directory. private final Timer timer = new Timer(); public void start() { timer.schedule(new TimerTask(){ public void run() { try{ JmsComponent comp = (JmsComponent)sprContext.getBean("wmq"); context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(comp.getConfiguration().getConnectionFactory())); //context.addRoutes(new SedaRouteBuilder(routingProps)); context.addRoutes(new CamelTimeAnalyzer()); //context.addRoutes(new AggregateAndProcess()); context.start(); }catch(Exception ex){ log.error("Error while parsing", ex); timer.cancel(); } } }, 2000); } public static void main(String[] args) { StartCamelTimeAnalyser startCamelTimeAnalyser = new StartCamelTimeAnalyser(); startCamelTimeAnalyser.start(); } -- View this message in context: http://camel.465427.n5.nabble.com/Type-Converters-Load-tp4982080p4984258.html Sent from the Camel - Users mailing list archive at Nabble.com.
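The Timer is not needed: once the context is started, the file consumer itself polls the directory on its own schedule, so a standalone app only has to start the context and then keep the main thread alive until shutdown (Camel's Main helper class, mentioned elsewhere in this archive, does this and also installs a shutdown hook). The keep-alive pattern itself, sketched with the stdlib and the Camel context stubbed out (illustrative names):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class KeepAliveDemo {
    private final CountDownLatch shutdown = new CountDownLatch(1);
    private final AtomicBoolean started = new AtomicBoolean();

    // Stand-in for context.start(): routes begin polling in background threads.
    void startContext() { started.set(true); }

    // Stand-in for context.stop(), e.g. invoked from a JVM shutdown hook.
    void stop() { shutdown.countDown(); }

    // Start once, then park the main thread until asked to stop.
    boolean run(long maxWaitMillis) throws InterruptedException {
        startContext();
        shutdown.await(maxWaitMillis, TimeUnit.MILLISECONDS);
        return started.get();
    }

    public static void main(String[] args) throws InterruptedException {
        KeepAliveDemo app = new KeepAliveDemo();
        Runtime.getRuntime().addShutdownHook(new Thread(app::stop));
        app.run(100); // a real app would wait without a timeout
    }
}
```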
Type Converters Load
Hi, I have the below type converters in my util. 1. Convert a "|"-delimited String to a List of tokens. 2. Convert a StringBuilder to a String. CsvDataFormat csv = new CsvDataFormat(); CSVStrategy strategy = CSVStrategy.DEFAULT_STRATEGY; strategy.setDelimiter('|'); csv.setStrategy(strategy); from("file:/camel_proj/data/input/mars") .log("Starting to process big file: ${header.CamelFileName}") .split(body().tokenize("\n")).streaming() .unmarshal(csv).convertBodyTo(List.class) .aggregate(constant(true), new MyListAggregation()).completionSize(2000) .executorService(threadPool1) .bean(Parser1.class,"parseList") // Returns a StringBuilder object .convertBodyTo(String.class) .to("file:/camel_proj/data/output/mars?fileName=${file:onlyname.noext}"); Issue: I see in the logs that the Camel type converters are being loaded for each conversion. I would expect the converters to be loaded only on the first call, with subsequent conversions simply reusing them. Is this intentional? Is there a way to avoid reloading them? 
Log: 2011-11-10 14:00:10,739 INFO [pool-2-thread-1] converter.AnnotationTypeConverterLoader - Found 3 packages with 15 @Converter classes to load 2011-11-10 14:00:10,742 INFO [pool-2-thread-1] converter.DefaultTypeConverter - Loaded 154 core type converters (total 154 type converters) 2011-11-10 14:00:10,743 INFO [pool-2-thread-1] converter.AnnotationTypeConverterLoader - Loaded 2 @Converter classes 2011-11-10 14:00:10,743 INFO [pool-2-thread-1] converter.DefaultTypeConverter - Loaded additional 5 type converters (total 159 type converters) in 0.001 seconds 2011-11-10 14:00:10,870 INFO [pool-2-thread-1] converter.DefaultTypeConverter - Loaded 154 core type converters (total 154 type converters) 2011-11-10 14:00:10,871 INFO [pool-2-thread-1] converter.AnnotationTypeConverterLoader - Loaded 2 @Converter classes 2011-11-10 14:00:10,871 INFO [pool-2-thread-1] converter.DefaultTypeConverter - Loaded additional 5 type converters (total 159 type converters) in 0.001 seconds 2011-11-10 14:00:10,998 INFO [pool-2-thread-1] converter.DefaultTypeConverter - Loaded 154 core type converters (total 154 type converters) 2011-11-10 14:00:10,999 INFO [pool-2-thread-1] converter.AnnotationTypeConverterLoader - Loaded 2 @Converter classes 2011-11-10 14:00:10,999 INFO [pool-2-thread-1] converter.DefaultTypeConverter - Loaded additional 5 type converters (total 159 type converters) in 0.001 seconds Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Type-Converters-Load-tp4982080p4982080.html Sent from the Camel - Users mailing list archive at Nabble.com.
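The converter registry is loaded once per CamelContext, so repeated "Loaded 154 core type converters" lines usually mean new contexts are being created on the fly. A likely culprit, given the MyListAggregation posted in the earlier "synchronization issue" thread, is the new DefaultCamelContext() created inside aggregate(): every fresh context scans and loads its own converter registry. Returning newExchange on the first call (instead of building a DefaultExchange over a new context) avoids it. The per-context cost, sketched with a stdlib stand-in:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ConverterLoadDemo {
    static final AtomicInteger loads = new AtomicInteger();

    // Stand-in for DefaultCamelContext: each instance loads its own registry,
    // producing one "Loaded 154 core type converters" line per construction.
    static class Context {
        Context() { loads.incrementAndGet(); }
    }

    // Anti-pattern: a fresh context per aggregated group.
    static int aggregateWithFreshContexts(int groups) {
        loads.set(0);
        for (int i = 0; i < groups; i++) new Context();
        return loads.get();
    }

    // Fix: reuse the one context the route already runs in.
    static int aggregateReusingContext(int groups) {
        loads.set(0);
        Context shared = new Context();
        for (int i = 0; i < groups; i++) { Context inUse = shared; }
        return loads.get();
    }

    public static void main(String[] args) {
        System.out.println(aggregateWithFreshContexts(3)); // 3 registry loads
        System.out.println(aggregateReusingContext(3));    // 1 registry load
    }
}
```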
RE: Performance issue
Thanks a lot. Yes it is the granularity. I also saw the below in the logs, which I guess decreases performance. What I am doing is: 1. reading a file 2. splitting it using the "\n" token 3. unmarshalling each line using the "|" delimiter (.unmarshal(csv).convertBodyTo(List.class).aggregate(constant(true), new MyListAggregation()).completionSize(2000)) 4. processing the list of tokens 5. writing them to a file. I see the below lines in my logs suggesting that the same thing (loading type converters) is being repeated. 2011-11-09 14:58:34,116 INFO [pool-1-thread-1] impl.DefaultCamelContext - JMX enabled. Using ManagedManagementStrategy. 2011-11-09 14:58:34,117 INFO [pool-1-thread-1] converter.AnnotationTypeConverterLoader - Found 3 packages with 15 @Converter classes to load 2011-11-09 14:58:34,125 INFO [pool-1-thread-1] converter.DefaultTypeConverter - Loaded 154 core type converters (total 154 type converters) 2011-11-09 14:58:34,126 INFO [pool-1-thread-1] converter.AnnotationTypeConverterLoader - Loaded 2 @Converter classes 2011-11-09 14:58:34,126 INFO [pool-1-thread-1] converter.DefaultTypeConverter - Loaded additional 5 type converters (total 159 type converters) in 0.001 seconds 2011-11-09 14:58:35,472 INFO [pool-1-thread-1] impl.DefaultCamelContext - JMX enabled. Using ManagedManagementStrategy. 2011-11-09 14:58:35,472 INFO [pool-1-thread-1] converter.AnnotationTypeConverterLoader - Found 3 packages with 15 @Converter classes to load 2011-11-09 14:58:35,480 INFO [pool-1-thread-1] converter.DefaultTypeConverter - Loaded 154 core type converters (total 154 type converters) 2011-11-09 14:58:35,481 INFO [pool-1-thread-1] converter.AnnotationTypeConverterLoader - Loaded 2 @Converter classes 2011-11-09 14:58:35,482 INFO [pool-1-thread-1] converter.DefaultTypeConverter - Loaded additional 5 type converters (total 159 type converters) in 0.001 Why is this, and is there a way to load them only once? 
Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Performance-issue-tp4964485p4978941.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: correlationExpression while Aggregating
Thanks a lot. Below is the code I am using, and I get the error shown. Not sure what is missing. from("file:C:\\camelProject\\data\\inbox\\") .log("Starting to process big file: ${header.CamelFileName}") .split(body().tokenize("\n")).streaming() .unmarshal(csv).convertBodyTo(List.class) .bean(Parser1.class,"create_fp_record1") .aggregate(constant(true), new MyAggregationStrategy()).completionSize(2000) .to("file:C:\\camelProject\\data\\outbox\\?fileExist=Append&fileName=${file:onlyname.noext}"); public String create_fp_record1(List marsRecord) { log.debug("START to parse record: "); StringBuilder str = new StringBuilder(300); str.append("L2|"); str.append(marsRecord.get(1)); str.append(delimeter); . . . . . str.append(delimeter); str.append("xx"); log.debug("END parse record"); return str.toString(); I see these logs as well as the error log below. Not sure how these become null. 2011-11-08 13:42:35,827 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] bean.BeanProcessor - Setting bean invocation result on the IN message: L2|204043110533530 2011-11-08 13:42:35,827 INFO [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] converter.AnnotationTypeConverterLoader - Found 3 packages with 15 @Converter classes to load 2011-11-08 13:42:35,827 INFO [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] converter.DefaultTypeConverter - Loaded 154 core type converters (total 154 type converters) 2011-11-08 13:42:35,827 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] converter.DefaultTypeConverter - Loading additional type converters ... 
2011-11-08 13:42:35,827 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] converter.AnnotationTypeConverterLoader - Loading file META-INF/services/org/apache/camel/TypeConverter to retrieve list of packages, from url: jar:file:/C:/Exec/apache-camel/lib/camel-spring-ws-2.8.1.jar!/META-INF/services/org/apache/camel/TypeConverter 2011-11-08 13:42:35,827 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] converter.AnnotationTypeConverterLoader - Loading file META-INF/services/org/apache/camel/TypeConverter to retrieve list of packages, from url: jar:file:/C:/Exec/apache-camel/lib/camel-spring-integration-2.8.1.jar!/META-INF/services/org/apache/camel/TypeConverter 2011-11-08 13:42:35,842 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] converter.AnnotationTypeConverterLoader - Loading file META-INF/services/org/apache/camel/TypeConverter to retrieve list of packages, from url: jar:file:/C:/Exec/apache-camel/lib/camel-core-2.8.1.jar!/META-INF/services/org/apache/camel/TypeConverter 2011-11-08 13:42:35,842 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] converter.AnnotationTypeConverterLoader - Loading file META-INF/services/org/apache/camel/TypeConverter to retrieve list of packages, from url: file:/C:/MyWorkspaces/camelTest/src/resources/META-INF/services/org/apache/camel/TypeConverter 2011-11-08 13:42:35,842 INFO [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] converter.AnnotationTypeConverterLoader - Loaded 2 @Converter classes 2011-11-08 13:42:35,842 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] impl.DefaultPackageScanClassResolver - Searching for annotations of org.apache.camel.Converter in packages: [com.verizon.learn.camel.utils] 2011-11-08 13:42:35,842 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] impl.DefaultPackageScanClassResolver - Found: [class com.verizon.learn.camel.utils.NamesConverter] 2011-11-08 
13:42:35,842 INFO [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] converter.AnnotationTypeConverterLoader - Found 1 packages with 1 @Converter classes to load 2011-11-08 13:42:35,842 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] converter.DefaultTypeConverter - Loading additional type converters done 2011-11-08 13:42:35,842 INFO [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] converter.DefaultTypeConverter - Loaded additional 6 type converters (total 160 type converters) in 0.015 seconds 2011-11-08 13:42:35,842 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] aggregate.AggregateProcessor - Aggregation complete for correlation key true sending aggregated exchange: Exchange[Message: null -- View this message in context: http://camel.465427.n5.nabble.com/correlationExpression-while-Aggregating-tp4975191p4975364.html Sent from the Camel - Users mailing list archive at Nabble.com.
correlationExpression while Aggregating
Hi All, Appreciate your help. I have a bean that returns a String or a StringBuilder. I want to aggregate the first 2000 String's returned by this method into a single file. I am having trouble with finding the right correlationExpression to aggregate them. Please can you help me fix this. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/correlationExpression-while-Aggregating-tp4975191p4975191.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Performance issue
Looks like this happens in a random sequence. I passed in 5 files and below are the logs. I guess it's to do with some CPU activity like garbage collection. Is there a way to make it consistent, except for the first read? 2011-11-07 08:45:53,553 INFO [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] route1- Starting to process big file: MARS_TAP.CDNLDLTIWB0800010_2k.dat 2011-11-07 08:45:53,569 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] camel.MarsParser - Time of calling the parseMarsData method 2011-11-07 08:45:53,631 INFO [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] route1- Starting to process big file: MARS_TAP.CDNLDLTIWB0800010_2k1.dat 2011-11-07 08:45:53,631 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] camel.MarsParser - Time of calling the parseMarsData method 2011-11-07 08:45:53,678 INFO [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] route1- Starting to process big file: MARS_TAP.CDNLDLTIWB0800010_2k2.dat 2011-11-07 08:45:53,678 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] camel.MarsParser - Time of calling the parseMarsData method 2011-11-07 08:45:53,725 INFO [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] route1- Starting to process big file: MARS_TAP.CDNLDLTIWB0800010_2k3.dat 2011-11-07 08:45:53,725 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] camel.MarsParser - Time of calling the parseMarsData method 2011-11-07 08:45:53,756 INFO [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] route1- Starting to process big file: MARS_TAP.CDNLDLTIWB0800010_2k4.dat 2011-11-07 08:45:53,772 DEBUG [Camel (camel-1) thread #0 - file://C:camelProjectdatainboxmars] camel.MarsParser - Time of calling the parseMarsData method -- View this message in context: http://camel.465427.n5.nabble.com/Performance-issue-tp4964485p4971170.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Performance issue
I also tried the below. For some reason the time taken from printing "Starting to process big file" to calling the bean method "parseMarsData" averages 15 milliseconds. The input file has 2000 lines of data. I am just passing them as a string to the method "parseMarsData", which does some parsing on the data. Please advise if there is a better way to do this that improves performance. from("file:C:\\camelProject\\data\\inbox\\mars") .log("Starting to process big file: ${header.CamelFileName}") .bean(MarsParser.class,"parseMarsData") .to("file:C:\\camelProject\\data\\outbox\\mars?fileName=${file:onlyname.noext}"); Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Performance-issue-tp4964485p4965077.html Sent from the Camel - Users mailing list archive at Nabble.com.
Performance issue
Hi, I am converting a file of 2000 records to a String using the below Camel API. String input = exchange.getIn().getBody(String.class); But I see that it's taking an average of 15 milliseconds. Is there any way to improve this? I am looking for times around 1 millisecond. Appreciate your help. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Performance-issue-tp4964485p4964485.html Sent from the Camel - Users mailing list archive at Nabble.com.
RE: synchronization issue
Sorry all. Please ignore the above thread. It was an issue with the way I had created the myList object in the MyListAggregation (AggregationStrategy) class. I had it as a global variable. The below works... public Exchange aggregate(Exchange oldExch, Exchange newExch) { try{ List myList = null; if (oldExch == null) { oldExch = new DefaultExchange(new DefaultCamelContext()); myList = new ArrayList(); oldExch.getIn().setBody(myList); } myList = oldExch.getIn().getBody(List.class); myList.add(newExch.getIn().getBody(String.class)); oldExch.getIn().setBody(myList); }catch(Exception ex){ logger.error("Error while aggregating",ex); } return oldExch; } -- View this message in context: http://camel.465427.n5.nabble.com/synchronization-issue-tp4956228p4958346.html Sent from the Camel - Users mailing list archive at Nabble.com.
RE: synchronization issue
The aggregation I am using is public Exchange aggregate(Exchange oldExch, Exchange newExch) { try{ if (oldExch == null) { oldExch = new DefaultExchange(new DefaultCamelContext()); oldExch.getIn().setBody(myList); } myList = oldExch.getIn().getBody(List.class); myList.add(newExch.getIn().getBody(String.class)); oldExch.getIn().setBody(myList); }catch(Exception ex){ logger.error("Error while aggregating",ex); } return oldExch; } -- View this message in context: http://camel.465427.n5.nabble.com/synchronization-issue-tp4956228p4958327.html Sent from the Camel - Users mailing list archive at Nabble.com.
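The field myList is the problem here: one instance of the strategy serves every aggregation group, so all groups append into the same list (and under parallelProcessing, from several threads at once), which explains the oversized batches seen in this thread. Keeping the list per group, inside the exchange body, isolates the groups. The field-vs-per-group difference, in a stdlib sketch (the exchange is reduced to just its List body):

```java
import java.util.ArrayList;
import java.util.List;

public class SharedFieldAggregationDemo {
    // Buggy shape: one field shared by every aggregation group.
    static class SharedFieldStrategy {
        private final List<String> myList = new ArrayList<>();
        List<String> aggregate(List<String> oldBody, String newBody) {
            myList.add(newBody);
            return myList;
        }
    }

    // Fixed shape: state lives in the (old) exchange body, one list per group.
    static class PerGroupStrategy {
        List<String> aggregate(List<String> oldBody, String newBody) {
            List<String> list = (oldBody == null) ? new ArrayList<>() : oldBody;
            list.add(newBody);
            return list;
        }
    }

    static int sharedGroupSize() {
        SharedFieldStrategy s = new SharedFieldStrategy();
        List<String> groupA = s.aggregate(null, "a1");
        s.aggregate(null, "b1"); // group B's record leaks into the same list
        return groupA.size();
    }

    static int perGroupSize() {
        PerGroupStrategy s = new PerGroupStrategy();
        List<String> groupA = s.aggregate(null, "a1");
        s.aggregate(null, "b1"); // group B stays separate
        return groupA.size();
    }

    public static void main(String[] args) {
        System.out.println(sharedGroupSize()); // 2: groups bleed together
        System.out.println(perGroupSize());    // 1: each group counts its own
    }
}
```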
RE: synchronization issue
The below logs show the various sizes of the list. Looks like the below line of code has something wrong in it. Please help solve this. *from("file:/work_dir/camel_proj/data/input?delete=true") .log("Starting to process big file: ${header.CamelFileName}") .split(body().tokenize("\n")).streaming() .aggregate(header("CamelFileName"), new MyListAggregation()).completionSize(2000).onCompletion()* fileToQueue.log.1:2011-11-02 09:51:40,723 INFO [pool-2-thread-1] camel.ListParser - Size of MyList : 23011 fileToQueue.log.2:2011-11-02 09:51:39,103 INFO [pool-2-thread-10] camel.ListParser - Size of MyList : 21070 fileToQueue.log.3:2011-11-02 09:51:37,820 INFO [pool-2-thread-9] camel.ListParser - Size of MyList : 19094 fileToQueue.log.4:2011-11-02 09:51:36,881 INFO [pool-2-thread-8] camel.ListParser - Size of MyList : 17085 fileToQueue.log.5:2011-11-02 09:51:36,003 INFO [pool-2-thread-7] camel.ListParser - Size of MyList : 17011 fileToQueue.log.6:2011-11-02 09:51:35,018 INFO [pool-2-thread-6] camel.ListParser - Size of MyList : 13074 fileToQueue.log.7:2011-11-02 09:51:33,583 INFO [pool-2-thread-4] camel.ListParser - Size of MyList : 9052 fileToQueue.log.7:2011-11-02 09:51:34,215 INFO [pool-2-thread-5] camel.ListParser - Size of MyList : 11064 fileToQueue.log.8:2011-11-02 09:51:31,116 INFO [pool-2-thread-1] camel.ListParser - Size of MyList : 2017 fileToQueue.log.8:2011-11-02 09:51:31,615 INFO [pool-2-thread-2] camel.ListParser - Size of MyList : 4010 fileToQueue.log.8:2011-11-02 09:51:32,887 INFO [pool-2-thread-3] camel.ListParser - Size of MyList : 7024 -- View this message in context: http://camel.465427.n5.nabble.com/synchronization-issue-tp4956228p4958123.html Sent from the Camel - Users mailing list archive at Nabble.com.
RE: synchronization issue
Thanks a lot. I am using the "completionSize" option in the aggregation and I've set it to 2000. But in the log above you can see that the size of the list passed to the bean "parseList" is a little more than 2000. I am not sure if I have something wrong in the above route. Appreciate your help. -- View this message in context: http://camel.465427.n5.nabble.com/synchronization-issue-tp4956228p4958027.html Sent from the Camel - Users mailing list archive at Nabble.com.
synchronization issue
Hi, I have a weird situation. The method is synchronized and the List is also synchronized, but I get a ConcurrentModificationException. Appreciate your help. Thanks & regards, Ebe Camel Route: from("file:/vzwhome/c0sineb/work_dir/camel_proj/data/input?delete=true") //from("file:C:\\camelProject\\data\\inbox?delete=true") .log("Starting to process big file: ${header.CamelFileName}") .split(body().tokenize("\n")).streaming() .aggregate(header("CamelFileName"), new MyListAggregation()).completionSize(2000) .parallelProcessing() .executorService(threadPool) .bean(ListParser.class,"parseList") ListParser class: Line 16 in the below error log is in *Bold* public synchronized StringBuilder parseList(List myList) { List syncList = Collections.synchronizedList(myList); logger.info("Size of MyList : "+syncList.size()); StringBuilder result = new StringBuilder(); try{ *for(String in_str: syncList){* result.append("&").append(parseString(in_str)); } }catch(Exception ex){ logger.error("Error parsing List",ex); } return result; } For some reason I get the below error 2011-11-01 16:17:13,220 DEBUG [pool-2-thread-1] aggregate.AggregateProcessor - Processing aggregated exchange: [FAILED toString()] 2011-11-01 16:17:13,221 INFO [pool-2-thread-1] impl.DefaultCamelContext - JMX enabled. Using ManagedManagementStrategy. 
2011-11-01 16:17:13,222 DEBUG [pool-2-thread-1] management.DefaultManagementAgent - Starting JMX agent on server: com.sun.jmx.mbeanserver.JmxMBeanServer@1a1c42f 2011-11-01 16:17:13,223 INFO [pool-2-thread-1] camel.VolteListParser - Size of MyList : 2024 2011-11-01 16:17:13,223 ERROR [pool-2-thread-1] camel.VolteListParser - Error parsing List java.util.ConcurrentModificationException at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372) at java.util.AbstractList$Itr.next(AbstractList.java:343) at com.verizon.learn.camel.VolteListParser.parseVolteList(ListParser.java:16) -- View this message in context: http://camel.465427.n5.nabble.com/synchronization-issue-tp4956228p4956228.html Sent from the Camel - Users mailing list archive at Nabble.com.
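Collections.synchronizedList only makes individual method calls atomic; it does not prevent a ConcurrentModificationException when the list is structurally modified while being iterated. Here the aggregation strategy keeps appending to the same list instance the bean is iterating (the shared-field issue discussed later in this thread), so the iterator's modification check trips. Iterating a snapshot, or giving each batch its own list, avoids it. A deterministic stdlib reproduction:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    // Mimics parseList() iterating while something keeps adding to the SAME list.
    static boolean iterateWhileGrowing(List<String> shared) {
        List<String> syncList = Collections.synchronizedList(shared);
        try {
            for (String s : syncList) {
                // Stand-in for the aggregation side appending mid-iteration.
                shared.add("late-" + s);
            }
            return false; // no exception
        } catch (ConcurrentModificationException e) {
            return true; // exactly the failure in the stack trace above
        }
    }

    // Fix: iterate a snapshot; the live list can keep growing safely.
    static boolean iterateSnapshot(List<String> shared) {
        try {
            for (String s : new ArrayList<>(shared)) {
                shared.add("late-" + s);
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        List<String> records = new ArrayList<>(List.of("r1", "r2"));
        System.out.println(iterateWhileGrowing(records)); // true: CME reproduced
    }
}
```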
Re: Spliter in Camel
Ah ok. I was trying to get some performance statistics and was using a single thread. Any tips on increasing performance when using Camel (I would use a thread pool)? My task would be to read a file (CSV or XML), split it into single records, process these records, aggregate 2000 records into 1, and publish it to MQ. I currently see that it takes a little less than a second to process a file of 2000 records. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Spliter-in-Camel-tp4940967p4955063.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Spliter in Camel
Thanks a lot, all of you. As you can see from the code above in this thread, I am splitting based on the new-line token and sending each piece to a bean. I noticed some gaps in the processing times. I see 5 messages being processed in a single millisecond, then a gap of 15 milliseconds before the next 5 messages are sent to the bean, and this pattern repeats itself. Any reason for this time delay? I went through the Splitter and streaming docs and do not see any specific property I could set for continuous streaming. 2011-10-31 13:42:43,874 DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,874 DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,*874* DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,*890* DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,890 DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,890 DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,890 DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,890 DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,890 DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,890 DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,890 DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,*890* DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,*905* DEBUG [pool-1-thread-1] 
bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,905 DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: 2011-10-31 13:42:43,905 DEBUG [pool-1-thread-1] bean.BeanProcessor - Setting bean invocation result on the IN message: -- View this message in context: http://camel.465427.n5.nabble.com/Spliter-in-Camel-tp4940967p4953111.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Spliter in Camel
Thanks a lot Christian. I just noticed that adding an aggregator slows down the process, and it's huge. Without the aggregation, the time taken between parsing each line of data is in nanoseconds (very negligible), but as I add the aggregation, the time between parsing each line of data goes up to 30 milliseconds. Also, if I do not have the below executor statement, the process terminates in the middle of iterating through the file. ".executorService(threadPool)" Not sure if I have something totally wrong. Please advise. from("file:C:\\camelProject\\data\\inbox?fileName=someFile.txt&delete=true") .log("Starting to process big file: ${header.CamelFileName}") .split(body().tokenize("\n")).streaming() .executorService(threadPool) .bean(MyParser.class,"parseString") .aggregate(header("CamelFileName"), new MyAggregationStrategy()).completionSize(2000) .to("jms:queue:test_request_3"); As suggested, I am using a StringBuilder in my aggregation strategy. StringBuilder builder = new StringBuilder(); @Override public Exchange aggregate(Exchange exchange1, Exchange exchange2) { if (exchange1 == null) { return exchange2; } builder.append(exchange1.getIn().getBody(String.class)).append(exchange2.getIn().getBody(String.class)); exchange1.getIn().setBody(builder.toString()); return exchange1; } -- View this message in context: http://camel.465427.n5.nabble.com/Spliter-in-Camel-tp4940967p4946971.html Sent from the Camel - Users mailing list archive at Nabble.com.
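The instance-field StringBuilder in that strategy is shared across every aggregate() call, so content already accumulated is appended again on each new call and the body grows far faster than intended, which would explain much of the slowdown. A stateless strategy that combines only the two bodies in hand avoids this. The difference, reduced to a stdlib sketch over plain String bodies:

```java
public class SharedBuilderDemo {
    // Buggy shape: one StringBuilder field reused across every aggregate() call,
    // so earlier content gets appended again each time.
    static class FieldBuilderStrategy {
        private final StringBuilder builder = new StringBuilder();
        String aggregate(String oldBody, String newBody) {
            if (oldBody == null) return newBody;
            builder.append(oldBody).append(newBody);
            return builder.toString();
        }
    }

    // Stateless shape: combine only the two bodies in hand.
    static String statelessAggregate(String oldBody, String newBody) {
        return (oldBody == null) ? newBody : oldBody + newBody;
    }

    static String foldWithField(String... parts) {
        FieldBuilderStrategy s = new FieldBuilderStrategy();
        String acc = null;
        for (String p : parts) acc = s.aggregate(acc, p);
        return acc;
    }

    static String foldStateless(String... parts) {
        String acc = null;
        for (String p : parts) acc = statelessAggregate(acc, p);
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(foldWithField("a", "b", "c")); // "ababc": duplicated content
        System.out.println(foldStateless("a", "b", "c")); // "abc": what was intended
    }
}
```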
Re: Spliter in Camel
Thanks a lot for your help. I have another memory-related question. The scenario is that I get a file with n records, split them using \n, and stream them to a bean. I guess there would be a single instance of the bean in memory for each thread (I am using an Executor). I then pass the output from the bean (which is also a String) to an aggregator to concatenate the pieces into buckets of 2000 using an AggregationStrategy. The performance I see is very slow. Please let me know your thoughts on the below code. Appreciate your help. from("file:/input?fileName=someFile.txt&delete=true") .log("Starting to process big file: ${header.CamelFileName}") .split(body().tokenize("\n")).streaming().executorService(threadPool) .bean(MyParser.class,"parseString").aggregate(header("CamelFileName"), new MyAggregationStrategy()).completionSize(2000) .executorService(threadPool1).to("jms:queue:test_request_3").end() .log("Sent all messages to queue"); The AggregationStrategy does the below concatenation. exchange1.getIn().setBody(exchange1.getIn().getBody(String.class)+exchange2.getIn().getBody(String.class)); -- View this message in context: http://camel.465427.n5.nabble.com/Spliter-in-Camel-tp4940967p4942950.html Sent from the Camel - Users mailing list archive at Nabble.com.
Splitter in Camel
Hi, I have a file of 300,000 records and I use the split mechanism of Camel to split them and send each record to a processor. Does Camel store these records on the heap, or somewhere else, before it sends them to the processor? How does the Camel splitter work internally? I want to make sure that the splitter does not eat up the memory. Please provide some insight into this. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Spliter-in-Camel-tp4940967p4940967.html Sent from the Camel - Users mailing list archive at Nabble.com.
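For what it's worth: with .streaming() enabled, the splitter does not materialise all 300,000 records up front; it iterates the underlying stream and hands records to the processor one at a time, so only the current record (plus any in-flight exchanges) sits on the heap. Conceptually it behaves like the plain-Java sketch below (an illustration of the idea, not Camel's actual implementation):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class StreamingSplitSketch {
    // Reads one record at a time; the full input is never loaded as a list.
    static int splitAndProcess(BufferedReader reader) throws IOException {
        int processed = 0;
        String line;
        while ((line = reader.readLine()) != null) {
            process(line);  // stand-in for the downstream processor
            processed++;    // 'line' becomes collectable after this iteration
        }
        return processed;
    }

    static void process(String record) {
        // real per-record work would go here
    }

    public static void main(String[] args) throws IOException {
        String file = "rec1\nrec2\nrec3";
        System.out.println(splitAndProcess(new BufferedReader(new StringReader(file))));
    }
}
```

Without .streaming(), the splitter first evaluates the expression into a complete list of parts, which for a 300,000-record file is exactly the memory spike you want to avoid.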
Re: camel 2.4 with spring 2.5.6 TaskExecutor issue
Thanks a lot. It works now. -- View this message in context: http://camel.465427.n5.nabble.com/camel-2-4-with-spring-2-5-6-TaskExecutor-issue-tp3237897p4933214.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: camel 2.4 with spring 2.5.6 TaskExecutor issue
Hi, has this issue been resolved? Please advise. I am using the following versions: Camel 2.8.1, Spring 2.5.6. I am just trying to consume messages from a queue and write them to a SEDA endpoint, but I get the following error.

context.addRoutes(new RouteBuilder() {
    public void configure() {
        from("jms:queue:test_request_3").process(new Processor() {
            @Override
            public void process(Exchange exchange) throws Exception {
                exchange.getIn().getBody();
            }
        })
        .to("seda:test_output");
    }
});

Exception:

Exception in thread "main" java.lang.NoSuchMethodError: org.springframework.jms.listener.DefaultMessageListenerContainer.setTaskExecutor(Ljava/util/concurrent/Executor;)V
    at org.apache.camel.component.jms.JmsEndpoint.configureListenerContainer(JmsEndpoint.java:190)
    at org.apache.camel.component.jms.JmsEndpoint.createConsumer(JmsEndpoint.java:214)
    at org.apache.camel.component.jms.JmsEndpoint.createConsumer(JmsEndpoint.java:158)
    at org.apache.camel.component.jms.JmsEndpoint.createConsumer(JmsEndpoint.java:68)
    at org.apache.camel.impl.EventDrivenConsumerRoute.addServices(EventDrivenConsumerRoute.java:61)
    at org.apache.camel.impl.DefaultRoute.onStartingServices(DefaultRoute.java:75)
    at org.apache.camel.impl.RouteService.warmUp(RouteService.java:124)
    at org.apache.camel.impl.DefaultCamelContext.doWarmUpRoutes(DefaultCamelContext.java:1843)
    at org.apache.camel.impl.DefaultCamelContext.safelyStartRouteServices(DefaultCamelContext.java:1771)
    at org.apache.camel.impl.DefaultCamelContext.doStartOrResumeRoutes(DefaultCamelContext.java:1556)
    at org.apache.camel.impl.DefaultCamelContext.doStartCamel(DefaultCamelContext.java:1448)
    at org.apache.camel.impl.DefaultCamelContext.doStart(DefaultCamelContext.java:1338)
    at org.apache.camel.impl.ServiceSupport.start(ServiceSupport.java:67)
    at org.apache.camel.impl.ServiceSupport.start(ServiceSupport.java:54)
    at org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:1316)
    at com.verizon.learn.camel.QueueToFile.createConsumer(QueueToFile.java:37)
    at com.verizon.learn.camel.QueueToFile.main(QueueToFile.java:19)

Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/camel-2-4-with-spring-2-5-6-TaskExecutor-issue-tp3237897p4933096.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Marshal and Unmarshal files
Thanks a lot, Willem. I just saw that we could pass different expressions in the file name parameter, but they do not work for me. Example:

context.addRoutes(new RouteBuilder() {
    public void configure() {
        from("file:C:\\camelProject\\data\\outbox").unmarshal().zip()
            .to("file:C:\\camelProject\\data\\inbox?filename=${file:onlyname.noext}&delete=true");
    }
});

Reference: http://camel.apache.org/file-language.html

Error:

org.apache.camel.FailedToCreateRouteException: Failed to create route route1 at: >>> To[file:C:\camelProject\data\inbox?filename=${file:onlyname.noext}&delete=true] <<< in route: Route[[From[file:C:\camelProject\data\outbox]] -> [Marshal[o... because of Failed to resolve endpoint: file://C:\camelProject\data\inbox?delete=true&filename=%24%7Bfile%3Aonlyname.noext%7D due to: Failed to resolve endpoint: file://C:\camelProject\data\inbox?delete=true&filename=%24%7Bfile%3Aonlyname.noext%7D due to: There are 1 parameters that couldn't be set on the endpoint. Check the uri if the parameters are spelt correctly and that they are properties of the endpoint. Unknown parameters=[{filename=${file:onlyname.noext}}]

Regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Marshal-and-Unmarshal-files-tp4917704p4917972.html Sent from the Camel - Users mailing list archive at Nabble.com.
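For what it's worth, the file component's option name is case-sensitive: it is fileName with a capital N, which matches the error's Unknown parameters=[{filename=...}]. A tiny sketch of the corrected producer URI (the directory is the one from the post; I have not run this exact route):

```java
public class FileUriSketch {
    // Builds the producer endpoint URI with the correctly cased option name.
    static String producerUri(String dir) {
        return "file:" + dir + "?fileName=${file:onlyname.noext}";
    }

    public static void main(String[] args) {
        // In the route: .to(producerUri("C:\\camelProject\\data\\inbox"))
        System.out.println(producerUri("C:\\camelProject\\data\\inbox"));
    }
}
```

With fileName, the ${file:onlyname.noext} expression should then be evaluated per exchange by the file language rather than rejected as an unknown parameter.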
Marshal and Unmarshal files
Hi All, I am trying to marshal and unmarshal files using the zip format. When I marshal a file to zip, the file name does not change, though I can see it compressed in the output folder. I was able to change the name using the fileName option to *.zip, but this created a problem when I try to unmarshal the file: if I use the fileName option as mentioned above, the unmarshalled file (though uncompressed) still has the *.zip file name. Please help resolve this issue. I need to marshal a file to its zip format with the file name as *.zip, and then unmarshal it back to its original file name. Thanks & regards, Ebe -- View this message in context: http://camel.465427.n5.nabble.com/Marshal-and-Unmarshal-files-tp4917704p4917704.html Sent from the Camel - Users mailing list archive at Nabble.com.
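One common way to handle the naming (assuming the zip data format plus the file language, as in the later replies): append .zip on the way out with fileName=${file:name}.zip, and strip it on the way back with fileName=${file:name.noext}. The string manipulation those two expressions perform is simply:

```java
public class ZipNameSketch {
    // What ${file:name}.zip effectively does when marshalling.
    static String zipName(String original) {
        return original + ".zip";
    }

    // What ${file:name.noext} effectively does when unmarshalling.
    static String originalName(String zipped) {
        int dot = zipped.lastIndexOf('.');
        return dot < 0 ? zipped : zipped.substring(0, dot);
    }

    public static void main(String[] args) {
        System.out.println(zipName("orders.txt"));          // orders.txt.zip
        System.out.println(originalName("orders.txt.zip")); // orders.txt
    }
}
```

The round trip works because zipName/originalName are exact inverses as long as the .zip suffix is the last extension on the file.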
Re: Camel 2.3.0 - File Endpoint with delete=true and moveFailed doesn't move failed files
I am using Camel version 2.8.1, and it looks like having delete and onException together does not work. I have an invalid XML file (with lots of closing tags missing). The valid XMLs are processed and sent to the queue; the invalid XML is not processed, but all the files get deleted. I am new to Camel, so please let me know if I am missing something here. Code sample:

try {
    onException(Exception.class)
        .maximumRedeliveries(3)
        .handled(true)
        .to("file:C:\\camelProject\\data\\error");

    //ExecutorService executor = Executors.newFixedThreadPool(5);
    from("file:C:\\camelProject\\data\\inbox?delete=true")
        .process(new Processor() {
            @Override
            public void process(Exchange exchange) throws Exception {
                processMessage(exchange);
            }
        }).to("seda:orders");

    from("seda:orders")
        .choice()
            .when(header("CamelFileName").contains("incomingOrders"))
                .to("jms:queue:test_request_1", "file:C:\\camelProject\\data\\outbox")
            .when(header("CamelFileName").contains("outgoingOrders"))
                .to("jms:queue:test_request_2", "file:C:\\camelProject\\data\\outbox");
} catch (Exception ex) {
    ex.printStackTrace();
}

-- View this message in context: http://camel.465427.n5.nabble.com/Camel-2-3-0-File-Endpoint-with-delete-true-and-moveFailed-doesn-t-move-failed-files-tp511964p4900256.html Sent from the Camel - Users mailing list archive at Nabble.com.
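A note on the behaviour above: delete=true only applies when the exchange completes successfully, and handled(true) marks the exception as dealt with, so the file consumer still commits and deletes the failing file. The file component has a moveFailed option for this case; if the exception is allowed to propagate (drop handled(true)), the failing file is moved instead of deleted. A tiny sketch of the consumer URI (the directory name is a stand-in for the post's paths):

```java
public class ErrorHandlingUriSketch {
    // Consumer URI: delete successes, move failures to an "error" subdirectory.
    static String consumerUri(String dir) {
        return "file:" + dir + "?delete=true&moveFailed=error";
    }

    public static void main(String[] args) {
        System.out.println(consumerUri("C:\\camelProject\\data\\inbox"));
    }
}
```

With this setup the onException .to("file:...error") route becomes unnecessary, since the component itself quarantines the failing file.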
Re: Move does not work with file polling
Appreciate your help. For some reason each of the polling consumers polls for a few seconds and then terminates.

context.addRoutes(new RouteBuilder() {
    public void configure() {
        from("quartz://myTimer?trigger.repeatInterval=2000&trigger.repeatCount=-1")
            .setBody().simple("I was fired at ${header.fireTime}")
            .to("jms:queue:test_request_1");
    }
});

I thought this would keep running, but the above route terminates after sending out 3 messages at 2-second intervals. I want it to keep running until I kill the process. I know I am missing something, but I am not able to figure out what. -- View this message in context: http://camel.465427.n5.nabble.com/Move-does-not-work-with-file-polling-tp4876472p4889423.html Sent from the Camel - Users mailing list archive at Nabble.com.
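A quartz trigger with repeatCount=-1 does fire forever; what usually stops it is the enclosing program: if main() returns (or the CamelContext is stopped after a few seconds), the whole JVM, scheduler included, shuts down. Keeping the main thread blocked is enough. A plain-Java sketch of the pattern (the scheduled task is a stand-in for the quartz-fired route, and the firing count stands in for an external shutdown signal):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class KeepRunningSketch {
    // Fires a periodic task and blocks the calling thread until at least
    // n firings have happened.
    static int runUntil(int n) throws InterruptedException {
        AtomicInteger fired = new AtomicInteger();
        CountDownLatch shutdown = new CountDownLatch(1);
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

        timer.scheduleAtFixedRate(() -> {
            if (fired.incrementAndGet() >= n) {
                shutdown.countDown(); // in real code: Ctrl-C / a shutdown hook
            }
        }, 0, 50, TimeUnit.MILLISECONDS);

        // The crucial part: the main thread blocks here instead of returning,
        // so the scheduler (in Camel, the context) stays alive.
        shutdown.await();
        timer.shutdownNow();
        return fired.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("fired " + runUntil(5) + " times before shutdown");
    }
}
```

In a Camel standalone app the equivalent is starting the context and then blocking main (for example with a latch or by joining the current thread) until you decide to shut down.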
Re: Move does not work with file polling
Thanks a lot. My main concern is how to implement endless polling of a directory for files. I do not quite understand the inner workings of the Camel PollingConsumer. I have tried various approaches, but the process terminates as soon as the current set of files has been read and processed. Appreciate your help. -- View this message in context: http://camel.465427.n5.nabble.com/Move-does-not-work-with-file-polling-tp4876472p4889193.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Move does not work with file polling
I am trying to play around with Camel, and this is my first project using it. The problem I am trying to solve is:
1. Poll a particular directory for files (this should be running all the time).
2. Convert them into messages and write them to a SEDA (in-memory) queue.
3. Read from that queue, write to a different queue, and also convert each message into a file written to a different location.
I am looking for a way to pipeline all of the above. If you have an example or a reference handy, please pass it on. -- View this message in context: http://camel.465427.n5.nabble.com/Move-does-not-work-with-file-polling-tp4876472p4888954.html Sent from the Camel - Users mailing list archive at Nabble.com.
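The three steps map onto two routes joined by a seda: endpoint, which is just an in-memory queue between them. As a plain-Java analogue (no Camel; the BlockingQueue plays the role of seda:orders, and two lists stand in for the JMS queue and the output directory):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SedaPipelineSketch {
    // Runs the three-stage pipeline: "poll" files -> in-memory queue -> fan out.
    // Returns how many messages reached the JMS stand-in.
    static int runPipeline(String[] files) throws InterruptedException {
        BlockingQueue<String> seda = new LinkedBlockingQueue<>(); // seda:orders
        List<String> jmsQueue = new ArrayList<>();                // JMS queue stand-in
        List<String> outbox = new ArrayList<>();                  // file endpoint stand-in

        // Stage 1: the "file poller" publishes each file as a message.
        Thread poller = new Thread(() -> {
            for (String f : files) seda.add(f);
        });

        // Stages 2+3: consume from the queue, fan out to both destinations.
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < files.length; i++) {
                    String msg = seda.take();
                    jmsQueue.add(msg);
                    outbox.add(msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        poller.start();
        consumer.start();
        poller.join();
        consumer.join();
        return jmsQueue.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runPipeline(new String[]{"order1.xml", "order2.xml"}) + " messages delivered");
    }
}
```

In Camel the same shape is two route definitions, from("file:...") feeding to("seda:orders") and from("seda:orders") feeding the JMS and file endpoints, with the file consumer polling continuously for as long as the context is running.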
Move does not work with file polling
Hi, I am having trouble moving a file to a backup location after it is consumed and its contents sent to a queue. I am new to the Camel world and exploring it. Below is the code I am using.

*Spring config:*

file:C:\\camelProject\\data\\inbox?move=C:\\camelProject\\data\\inbox\\bkp
file:C:\\camelProject\\data\\outbox\\bkp
jms:queue:test_queue

Code snippet:

try {
    System.out.println("Starting Poller: " + consumerEndpoint);
    PollingConsumer consumer = consumerEndpoint.createPollingConsumer();
    Producer producer = targetEndpoint.createProducer();
    consumer.start();
    producer.start();
    while (true) {
        Exchange consumerX = consumer.receive(timeout);
        while (consumerX != null) {
            System.out.println("Received message: " + consumerX.getIn().getBody());
            Exchange newExchange = producer.createExchange(consumerX);
            newExchange.getIn().setBody(consumerX.getIn().getBody());
            newExchange.getIn().setHeaders(consumerX.getIn().getHeaders());
            producer.process(newExchange);
            consumerX = consumer.receive(timeout);
        }
        Thread.sleep(1000);
    }
} catch (Exception ex) {
    ex.printStackTrace();
}

-- View this message in context: http://camel.465427.n5.nabble.com/Move-does-not-work-with-file-polling-tp4876472p4876472.html Sent from the Camel - Users mailing list archive at Nabble.com.