Re: cxf bus
Hi,

Because your two routes are using the same port, you need to let the CXF endpoints share the same Bus to avoid shutting down the Jetty engine when you remove route1:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:cxf="http://camel.apache.org/schema/cxf"
       xmlns:camel="http://camel.apache.org/schema/spring"
       xmlns:cxfcore="http://cxf.apache.org/core"
       xmlns:http-conf="http://cxf.apache.org/transports/http/configuration"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
           http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/camel-cxf.xsd
           http://cxf.apache.org/transports/http/configuration http://cxf.apache.org/schemas/configuration/http-conf.xsd
           http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd
           http://cxf.apache.org/core http://cxf.apache.org/schemas/core.xsd">

  <import resource="classpath:META-INF/cxf/cxf.xml"/>

  <!-- configure the bus -->
  <cxfcore:bus bus="cxf1"/>

  <camel:camelContext id="camelContext">
    <camel:route id="route1">
      <camel:from uri="cxf://http://0.0.0.0:5100/proxy1?dataFormat=MESSAGE&amp;bus=#cxf1&amp;wsdlURL=http://192.168.0.218:8500/ws?wsdl"/>
      <camel:to uri="cxf://http://192.168.0.218:8500/ws?dataFormat=MESSAGE"/>
    </camel:route>
    <camel:route id="route2">
      <camel:from uri="cxf://http://0.0.0.0:5100/proxy2?dataFormat=MESSAGE&amp;bus=#cxf1&amp;wsdlURL=http://192.168.0.218:8500/ws?wsdl"/>
      <camel:to uri="cxf://http://192.168.0.218:8500/ws?dataFormat=MESSAGE"/>
    </camel:route>
  </camel:camelContext>
</beans>

--
Willem Jiang
Red Hat, Inc.
Web: http://www.redhat.com
Blog: http://willemjiang.blogspot.com (English)
      http://jnn.iteye.com (Chinese)
Twitter: willemjiang
Weibo: 姜宁willem

On Tuesday, November 5, 2013 at 11:03 AM, Ernest Lu wrote:

Hi,

I'm using Camel 2.10.7. My Spring configuration file looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:jaxws="http://cxf.apache.org/jaxws"
       xmlns:camel="http://camel.apache.org/schema/spring"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:cxf="http://cxf.apache.org/core"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
           http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd
           http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd
           http://cxf.apache.org/core http://cxf.apache.org/schemas/core.xsd">

  <camel:camelContext id="camelContext">
    <camel:route id="route1">
      <camel:from uri="cxf://http://0.0.0.0:5100/proxy1?dataFormat=MESSAGE&amp;wsdlURL=http://192.168.0.218:8500/ws?wsdl"/>
      <camel:to uri="cxf://http://192.168.0.218:8500/ws?dataFormat=MESSAGE"/>
    </camel:route>
    <camel:route id="route2">
      <camel:from uri="cxf://http://0.0.0.0:5100/proxy2?dataFormat=MESSAGE&amp;wsdlURL=http://192.168.0.218:8500/ws?wsdl"/>
      <camel:to uri="cxf://http://192.168.0.218:8500/ws?dataFormat=MESSAGE"/>
    </camel:route>
  </camel:camelContext>

  <bean id="webServiceTest" class="WebServiceTestImpl">
    <property name="camelContext" ref="camelContext"/>
  </bean>

  <jaxws:endpoint id="webServiceTestEndpoint" implementor="#webServiceTest" address="http://0.0.0.0:8100/service"/>
</beans>

I remove route1 via a web service call, but an exception is thrown as follows:

2013-11-05 10:55:17 INFO org.apache.camel.impl.DefaultShutdownStrategy:165 -- Starting to graceful shutdown 1 routes (timeout 300 seconds)
2013-11-05 10:55:17 INFO org.eclipse.jetty.server.handler.ContextHandler:698 -- stopped o.e.j.s.h.ContextHandler{,null}
2013-11-05 10:55:17 INFO org.apache.camel.impl.DefaultShutdownStrategy:561 -- Route: route1 shutdown complete, was consuming from: Endpoint[cxf://http://0.0.0.0:5100/proxy1?dataFormat=MESSAGE&wsdlURL=http%3A%2F%2F192.168.0.218%3A8500%2Fws%3Fwsdl]
2013-11-05 10:55:17 INFO org.apache.camel.impl.DefaultShutdownStrategy:210 -- Graceful shutdown of 1 routes completed in 0 seconds
2013-11-05 10:55:17 INFO org.apache.camel.spring.SpringCamelContext:1880 -- Route: route1 is stopped, was consuming from: Endpoint[cxf://http://0.0.0.0:5100/proxy1?dataFormat=MESSAGE&wsdlURL=http%3A%2F%2F192.168.0.218%3A8500%2Fws%3Fwsdl]
2013-11-05 10:55:17 INFO org.apache.camel.component.cxf.CxfEndpoint:844 -- shutdown the bus ... org.apache.cxf.bus.spring.SpringBus@734d246
2013-11-05 10:55:17 INFO org.eclipse.jetty.server.handler.ContextHandler:698 -- stopped o.e.j.s.h.ContextHandler{,null}
2013-11-05 10:55:17 INFO org.eclipse.jetty.server.handler.ContextHandler:698 -- stopped o.e.j.s.h.ContextHandler{,null}
2013-11-05 10:55:17 WARN org.eclipse.jetty.util.component.AbstractLifeCycle:199 -- FAILED qtp1950076528{8=4=5/254,4}#FAILED: java.lang.InterruptedException: sleep interrupted
java.lang.InterruptedException: sleep interrupted
    at
jibx dataformat does not support setting bindingname
The JiBX data format does not seem to allow setting the binding name on the marshaller/unmarshaller. Normally you would have the option of calling BindingDirectory.getFactory(bindingName, class), but this doesn't seem to be available via marshal().jibx(). Is there another way to do this?

-- View this message in context: http://camel.465427.n5.nabble.com/jibx-dataformat-does-not-support-setting-bindingname-tp5742615.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Re: bug in DefaultStreamCachingStrategy
Then the next release :) See CAMEL-6795. It is now defined by default as:

private transient String spoolDirectoryName = "${java.io.tmpdir}/camel/camel-tmp-#uuid#";

On Tue, Nov 5, 2013 at 1:27 AM, pmcneil pe...@mcneils.net wrote:

Nope, this is in 1.12.1 :-) and yes I can configure it, but what if something else expects it sans "/" ;-)

Cheers, Peter.

-- View this message in context: http://camel.465427.n5.nabble.com/bug-in-DefaultStreamCachingStrategy-tp5742591p5742620.html
Sent from the Camel - Users mailing list archive at Nabble.com.

--
Claus Ibsen - Red Hat, Inc.
Email: cib...@redhat.com
Twitter: davsclaus
Blog: http://davsclaus.com
Author of Camel in Action: http://www.manning.com/ibsen
Re: inProgressRepository Not clearing for items in idempotentRepository
Hi

Yeah, that sounds like a bug. Feel free to log a JIRA ticket.

On Mon, Nov 4, 2013 at 11:05 PM, skelly kelly.se...@gmail.com wrote:

I'm attempting to consume messages from an FTP server using an idempotent repository to ensure that I do not re-download a file unless it has been modified. Here is my (quite simple) Camel configuration:

<beans:bean id="downloadRepo" class="org.apache.camel.processor.idempotent.FileIdempotentRepository">
  <beans:property name="fileStore" value="/tmp/.repo.txt"/>
  <beans:property name="cacheSize" value="25000"/>
  <beans:property name="maxFileStoreSize" value="100"/>
</beans:bean>

<camelContext trace="true" xmlns="http://camel.apache.org/schema/spring">
  <endpoint id="myFtpEndpoint" uri="ftp://me@localhost?password=&amp;binary=true&amp;recursive=true&amp;consumer.delay=15000&amp;readLock=changed&amp;passiveMode=true&amp;noop=true&amp;idempotentRepository=#downloadRepo&amp;idempotentKey=$simple{file:name}-$simple{file:modified}"/>
  <endpoint id="myFileEndpoint" uri="file:///tmp/files"/>
  <route>
    <from uri="ref:myFtpEndpoint"/>
    <to uri="ref:myFileEndpoint"/>
  </route>
</camelContext>

When I start my application for the first time, all files are correctly downloaded from the FTP server and stored in the target directory, as well as recorded in the idempotent repo. When I restart my application, all files are correctly detected as already being in the idempotent repo on the first poll of the FTP server, and are not re-downloaded:

2013-11-04 16:52:10,811 TRACE [Camel (camel-1) thread #0 - ftp://me@localhost] org.apache.camel.component.file.remote.FtpConsumer: FtpFile[name=test1.txt, dir=false, file=true]
2013-11-04 16:52:10,811 TRACE [Camel (camel-1) thread #0 - ftp://me@localhost] org.apache.camel.component.file.remote.FtpConsumer: This consumer is idempotent and the file has been consumed before. Will skip this file: RemoteFile[test1.txt]

However, on all subsequent polls to the FTP server the idempotent check is short-circuited because the file is in progress:

2013-11-04 16:53:10,886 TRACE [Camel (camel-1) thread #0 - ftp://me@localhost] org.apache.camel.component.file.remote.FtpConsumer: FtpFile[name=test1.txt, dir=false, file=true]
2013-11-04 16:53:10,886 TRACE [Camel (camel-1) thread #0 - ftp://me@localhost] org.apache.camel.component.file.remote.FtpConsumer: Skipping as file is already in progress: test1.txt

I am using camel-ftp 2.11.1 (and observe the same behavior with 2.12.1).

When I inspect the source code I notice two interesting things. First, the GenericFileConsumer check that determines whether a file is already in progress, which is called from isValidFile(), always adds the file to the inProgressRepository:

protected boolean isInProgress(GenericFile<T> file) {
    String key = file.getAbsoluteFilePath();
    return !endpoint.getInProgressRepository().add(key);
}

Second, if a file is determined to match an entry already present in the idempotent repository, it is discarded (GenericFileConsumer.isValidFile() returns false). This means it is never published to an exchange, and thus never reaches the code which would remove it from the inProgressRepository. Since the in-progress check happens before the idempotent check, we will always short-circuit after we get into the in-progress state, and the file will never actually be checked again.

Am I reading this code correctly? Am I missing something here? This seems like a bug in the implementation of the isInProgress(GenericFile<T> file) method to me.

-- View this message in context: http://camel.465427.n5.nabble.com/inProgressRepository-Not-clearing-for-items-in-idempotentRepository-tp5742613.html
Sent from the Camel - Users mailing list archive at Nabble.com.

--
Claus Ibsen - Red Hat, Inc.
Email: cib...@redhat.com
Twitter: davsclaus
Blog: http://davsclaus.com
Author of Camel in Action: http://www.manning.com/ibsen
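The short-circuit described above can be illustrated with a minimal, self-contained sketch. The class and field names below are hypothetical, and plain java.util sets stand in for the real Camel repositories; only the ordering of the two checks mirrors the reported behavior.

```java
import java.util.HashSet;
import java.util.Set;

// Stand-in for the consumer's check ordering: the in-progress check runs
// first and ALWAYS adds the key, so a file that the idempotent check later
// rejects is never removed from the in-progress set and short-circuits
// every later poll.
class InProgressShortCircuit {
    public static final Set<String> inProgress = new HashSet<>();
    public static final Set<String> idempotent = new HashSet<>();

    // Mirrors isInProgress(): add() returns false if the key was already present.
    public static boolean isInProgress(String file) {
        return !inProgress.add(file);
    }

    // true if the file would actually be consumed (passes both checks)
    public static boolean isValidFile(String file) {
        if (isInProgress(file)) {
            return false; // short-circuit: the idempotent check never runs
        }
        if (idempotent.contains(file)) {
            return false; // discarded here, but never removed from inProgress
        }
        return true;
    }
}
```

Running the first poll leaves the file stuck in the in-progress set, so every later poll fails the first check before the idempotent repository is ever consulted.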
Re: Problem with exception handler (onException) on Camel 2.12.X routes.
Thank you for your reply! I'm afraid it doesn't seem to work even with 2.13-SNAPSHOT. I've even changed the exception to java.lang.Exception and it still isn't catching these exceptions. The exception should be caught when there is no connection to the database. That's how it works on Camel 2.7.0, where java.sql.SQLException is caught by the handler nicely.

<route>
  <description>NTCS Oracle Insertion Queue</description>
  <from uri="seda:insertion.queue"/>
  <onException>
    <exception>java.lang.Exception</exception>
    <handled>
      <constant>true</constant>
    </handled>
    <process ref="telemetryFailureProcessor"/>
    <to uri="seda:insertion.failure.queue"/>
  </onException>
  <transacted/>
  <split>
    <xpath>/ntcs-telemetry/telemetry</xpath>
    <bean ref="insertTransactedTelemetry"/>
  </split>
</route>

2013-11-05 09:50:06,840 | INFO | Apache Camel 2.13-SNAPSHOT (CamelContext: camel) started in 0.491 seconds | org.apache.camel.spring.SpringCamelContext | main
2013-11-05 09:50:35,237 | WARN | Transaction rollback (0x1c64f22) redelivered(false) for (MessageId: queue_ntcs.telemetry.in_ID_tcs-amq-dev-35729-1383645004167-5_1_1_1_1 on ExchangeId: ID-tcs-amq-dev-tng-iac-es-43055-1383645005720-0-2) caught: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: Unable to start the Universal Connection Pool: oracle.ucp.UniversalConnectionPoolException: Cannot get Connection from Datasource: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection | org.apache.camel.spring.spi.TransactionErrorHandler | Camel (camel) thread #1 - seda://insertion.queue
2013-11-05 09:50:35,239 | WARN | Error processing exchange. Exchange[JmsMessage[JmsMessageID: ID:tcs-amq-dev.tng.iac.es-35729-1383645004167-5:1:1:1:1]]. Caused by: [org.springframework.transaction.CannotCreateTransactionException - Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: Unable to start the Universal Connection Pool: oracle.ucp.UniversalConnectionPoolException: Cannot get Connection from Datasource: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection] | org.apache.camel.component.seda.SedaConsumer | Camel (camel) thread #1 - seda://insertion.queue
org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: Unable to start the Universal Connection Pool: oracle.ucp.UniversalConnectionPoolException: Cannot get Connection from Datasource: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
    at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:241)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:372)
    at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:128)
    at org.apache.camel.spring.spi.TransactionErrorHandler.doInTransactionTemplate(TransactionErrorHandler.java:174)
    at org.apache.camel.spring.spi.TransactionErrorHandler.processInTransaction(TransactionErrorHandler.java:134)
    at org.apache.camel.spring.spi.TransactionErrorHandler.process(TransactionErrorHandler.java:103)
    at org.apache.camel.spring.spi.TransactionErrorHandler.process(TransactionErrorHandler.java:112)
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
    at org.apache.camel.component.seda.SedaConsumer.sendToConsumers(SedaConsumer.java:291)
    at org.apache.camel.component.seda.SedaConsumer.doRun(SedaConsumer.java:200)
    at org.apache.camel.component.seda.SedaConsumer.run(SedaConsumer.java:147)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.sql.SQLException: Unable to start the Universal Connection Pool: oracle.ucp.UniversalConnectionPoolException: Cannot get Connection from Datasource: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
    at oracle.ucp.util.UCPErrorHandler.newSQLException(UCPErrorHandler.java:488)
    at oracle.ucp.util.UCPErrorHandler.throwSQLException(UCPErrorHandler.java:163)
    at oracle.ucp.jdbc.PoolDataSourceImpl.startPool(PoolDataSourceImpl.java:643)
    at
Re: cxf bus
Hi, thank you very much. I tried using a different port, but it still didn't work. Finally I specified the bus option on each CXF endpoint, and that solved the problem. Why doesn't it work without specifying this option? The changed Spring configuration file looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:jaxws="http://cxf.apache.org/jaxws"
       xmlns:camel="http://camel.apache.org/schema/spring"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:cxf="http://cxf.apache.org/core"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
           http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd
           http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd
           http://cxf.apache.org/core http://cxf.apache.org/schemas/core.xsd">

  <camel:camelContext id="camelContext">
    <camel:route id="route1">
      <camel:from uri="cxf://http://0.0.0.0:5100/proxy1?dataFormat=MESSAGE&amp;wsdlURL=http://192.168.0.218:8500/ws?wsdl&amp;bus=#mybus"/>
      <camel:to uri="cxf://http://192.168.0.218:8500/ws?dataFormat=MESSAGE&amp;bus=#mybus"/>
    </camel:route>
    <camel:route id="route2">
      <camel:from uri="cxf://http://0.0.0.0:5100/proxy2?dataFormat=MESSAGE&amp;wsdlURL=http://192.168.0.218:8500/ws?wsdl&amp;bus=#mybus"/>
      <camel:to uri="cxf://http://192.168.0.218:8500/ws?dataFormat=MESSAGE&amp;bus=#mybus"/>
    </camel:route>
  </camel:camelContext>

  <bean id="webServiceTest" class="WebServiceTestImpl">
    <property name="camelContext" ref="camelContext"/>
  </bean>

  <jaxws:endpoint id="webServiceTestEndpoint" implementor="#webServiceTest" address="http://0.0.0.0:8100/service" bus="mybus"/>

  <cxf:bus name="mybus" id="mybus"></cxf:bus>
</beans>

-- View this message in context: http://camel.465427.n5.nabble.com/cxf-bus-tp5742573p5742637.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Default value for a timer?
It will fire according to the defaults for the timer component, which is every 1000 milliseconds. See the documentation [1].

[1] http://camel.apache.org/timer.html

// Pontus

On Mon, Nov 4, 2013 at 7:07 PM, John D. Ament john.d.am...@gmail.com wrote:

I'm using Camel 2.10.7. We just noticed in our code we have a timer with no values set. How often will this timer fire?

John
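If the interval matters, it can also be made explicit rather than relying on the default. A sketch (the endpoint name is made up; period is the documented timer option, defaulting to 1000 ms):

```xml
<route>
  <!-- period=1000 matches the default; change it to fire at a different rate -->
  <from uri="timer:myTimer?period=1000"/>
  <log message="timer fired"/>
</route>
```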
camel exception - netty
Hi

While using camel-netty we have noticed that when the remote server is down, the netty component throws org.apache.camel.CamelException. Shouldn't it be throwing java.net.ConnectException instead? That would make error handling easier.

-
Regards
kiran Reddy

-- View this message in context: http://camel.465427.n5.nabble.com/camel-exception-netty-tp5742647.html
Sent from the Camel - Users mailing list archive at Nabble.com.
CXFRS Endpoint ignoring throwExceptionOnFailure property
Hi

I am using the CXFRS endpoint with the following URI format:

cxfrs://bean://rsCustodyClient?throwExceptionOnFailure=true

My custody client bean looks like:

<cxf:rsClient id="rsCustodyClient" address="http://localhost/service"
              serviceClass="com.mycompany.rs.client.CustodyTradeResource"
              loggingFeatureEnabled="true"/>

I am using Camel 2.10.3. Any help much appreciated.

Thanks
Joe

-- View this message in context: http://camel.465427.n5.nabble.com/CXFRS-Endpoint-ignoring-throwExceptionOnFailure-property-tp5742648.html
Sent from the Camel - Users mailing list archive at Nabble.com.
count of processed messages when using aggregation
Hello,

is there an easy way to count and sum all lines processed by an aggregator? Suppose my file has a number of lines; the route splits it into 100-line chunks and sends them to a remote system. The goal is to gather statistics on all sent lines. In the example below, .log() would always print the aggregation size (100 or less) and not the overall number of processed messages:

@Override
public void configure() throws Exception {
    from("file:input.csv")
        .unmarshal().csv().split(body()).streaming().parallelProcessing()
        .bean(myProcessor, "doWork")
        .aggregate(constant("id"), new Aggregator()).completionSize(100).completionTimeout(1000)
        .parallelProcessing()
        .to("remote")
        .log("Sent ${header.RecordsCounter} records to remote");
}

Thanks in advance!

-- View this message in context: http://camel.465427.n5.nabble.com/count-of-processed-messages-when-using-aggregation-tp5742649.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Basic Apache-Camel LoadBalancer-Failover Example
After much effort, I have found a way to implement this, basing myself on the load balancer example provided by Apache. I have uploaded the Eclipse project to my GitHub account; you can check it working here:

- https://github.com/Fl4m3Ph03n1x/stackoverflow/tree/master/loadbalancer-failover-springdsl-example

Although my example does respect the overall intended architecture, it has a few differences, as explained below:

- It uses the Spring DSL instead of the Java DSL
- MyApp-A is the load balancer. Every 10 it generates a report (instead of reading a file) and sends it to MyApp-B.
- MyApp-B corresponds to MINA server 1 on localhost:9991
- MyApp-C corresponds to MINA server 3 on localhost:9993
- MyApp-D corresponds to MINA server 2 on localhost:9992
- When MyApp-C receives the report, it sends it back to MyApp-A

Furthermore, it is also not clear when, where or why MyApp-C replies to MyApp-A with the changed report. This behavior is not specified in the Spring DSL code, and so far no one has been able to explain to me why it is even happening. So two problems remain:

1. How would this be done using the Java DSL?
2. Why is MyApp-C replying to MyApp-A, and how is it doing it?

Also, this may sound silly, but... would someone consider adding my example to the documentation if I commented it well?

-- View this message in context: http://camel.465427.n5.nabble.com/Basic-Apache-Camel-LoadBalancer-Failover-Example-tp5742551p5742650.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Re: count of processed messages when using aggregation
Hi

I think what you are logging is actually the aggregation size; try using the header from the split, CamelSplitSize.

Taariq

On Tue, Nov 5, 2013 at 1:57 PM, Olaf omgolafg...@gmail.com wrote:

Hello,

is there an easy way to count and sum all lines processed by an aggregator? Suppose my file has a number of lines; the route splits it into 100-line chunks and sends them to a remote system. The goal is to gather statistics on all sent lines. In the example below, .log() would always print the aggregation size (100 or less) and not the overall number of processed messages:

@Override
public void configure() throws Exception {
    from("file:input.csv")
        .unmarshal().csv().split(body()).streaming().parallelProcessing()
        .bean(myProcessor, "doWork")
        .aggregate(constant("id"), new Aggregator()).completionSize(100).completionTimeout(1000)
        .parallelProcessing()
        .to("remote")
        .log("Sent ${header.RecordsCounter} records to remote");
}

Thanks in advance!

-- View this message in context: http://camel.465427.n5.nabble.com/count-of-processed-messages-when-using-aggregation-tp5742649.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Migrate Apache Camel Endpoints
Thanks! -- View this message in context: http://camel.465427.n5.nabble.com/Migrate-Apache-Camel-Endpoints-tp5741899p5742652.html Sent from the Camel - Users mailing list archive at Nabble.com.
Forcing file write to complete without stream caching
Hi,

In my first attempt to use Camel I've run into an intra-route timing issue that I've only solved with a hack, so I was wondering whether there are any best practices for dealing with timing issues across multiple processing steps in a batch file pipeline.

Basically I am trying to avoid doing an HTTP POST with an empty payload: the route performing the HTTP POST is triggered before the file that it is wired to upload has been written. If I turn stream caching on, this problem goes away. However, since some files can be quite big, I'd prefer not to have to do stream caching. So to work around the issue, I've written a bean that just does a Thread.sleep() in order to wait for the upload file to actually get some data in it before firing off the HTTP POST.

I've got a two-step pipeline that:

1. Transcodes a batch input file into an intermediate format (using msgpack serialization);
2. Performs an HTTP POST of the intermediate format to a remote server.

I'd like to keep the intermediate format around on disk for debugging and manual replay tasks. My Camel context has two routes:

<route id="transcode-to-msgpack">
  <from uri="file:/tmp/d"/>
  <log message="Transcoding ${file:name} to msgpack"/>
  <to uri="bean:transcoder"/>
  <to uri="file:/tmp/b?fileName=${file:name.noext}.msgpack"/>
</route>

<route id="post-msgpack-payload">
  <from uri="file:/tmp/b"/>
  <from uri="file:/tmp/e"/>
  <log message="POSTing ${file:name} to the rating API"/>
  <setProperty propertyName="url.template">
    <constant>http://localhost:/calls/:source/:sequence</constant>
  </setProperty>
  <process ref="httpDataPump"/>
</route>

I have two custom beans doing the work:

1. transcoder - takes an InputStream and returns an InputStream that wraps and transcodes the InputStream from the file;
2. httpDataPump - contains an HTTP client that uploads the FileInputStream that the InMessage of the Exchange refers to.

Doing a Thread.sleep() seems like a real hack to me, so I was wondering if there is a more idiomatic way to solve the issue. I've looked into the preMoveNamePrefix options, but they appear to apply only to input files. Ideally I'm looking for something that can move the output file after it has been written. Any pointers are appreciated.

Cheers,

Ben
Re: count of processed messages when using aggregation
Hello,

thanks! CamelSplitSize is useful. I'd then add:

.to("remote")
.choice()
    .when(property("CamelSplitComplete").isEqualTo(true))
        .log("split ${property.CamelSplitSize} records")
    .otherwise()
        .log("file not completed yet");

But what if some messages fail, or are filtered out and never sent to the remote system? How do I sum those?

-- View this message in context: http://camel.465427.n5.nabble.com/count-of-processed-messages-when-using-aggregation-tp5742649p5742655.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Camel http4 adds request headers to response headers by default
Thank you! That's what I am looking for! -- View this message in context: http://camel.465427.n5.nabble.com/Camel-http4-adds-request-headers-to-response-headers-by-default-tp5742600p5742656.html Sent from the Camel - Users mailing list archive at Nabble.com.
http4 component remove host header
Hi,

We are using the Camel 2.12.0 http4 component to proxy HTTP requests to a web application. We ran into an issue with the host header being removed from every proxied request due to this line of code in http4's HttpProducer.java (line 106):

exchange.getIn().getHeaders().remove("host");

We found the post where this line of code was introduced. We would like to keep the host header in the HTTP proxy situation. Is there a way around this other than modifying the existing http4 component source code?

Thanks!

-- View this message in context: http://camel.465427.n5.nabble.com/http4-component-remove-host-header-tp5742657.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Re: inProgressRepository Not clearing for items in idempotentRepository
Thanks. I've submitted a bug: https://issues.apache.org/jira/browse/CAMEL-6936 In the meantime, do you have any alternative recommendations for my requirements? Basically, I want to consume files from an FTP server only if they are new or modified. I guess I would need to roll my own filter for this which implements the idempotent behavior? -- View this message in context: http://camel.465427.n5.nabble.com/inProgressRepository-Not-clearing-for-items-in-idempotentRepository-tp5742613p5742658.html Sent from the Camel - Users mailing list archive at Nabble.com.
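One way to sketch the roll-your-own idea: a custom filter (for example, one registered via the ftp endpoint's filter option) could keep its own record of each file's last-modified timestamp and accept only new or changed files. Below is a minimal, self-contained illustration of just that bookkeeping; the class and method names are hypothetical and the Camel types are omitted.

```java
import java.util.HashMap;
import java.util.Map;

// Tracks the last-modified timestamp seen per file name and accepts a file
// only when it is new or its timestamp has changed since the last poll.
class NewOrModifiedTracker {
    private final Map<String, Long> seen = new HashMap<>();

    // returns true if the file should be downloaded
    public boolean accept(String name, long lastModified) {
        // put() returns the previous value (or null if the name is new)
        Long previous = seen.put(name, lastModified);
        return previous == null || previous != lastModified;
    }
}
```

In a real filter this state would need to survive restarts (e.g. by persisting the map), which is essentially what the file-store idempotent repository was meant to provide.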
Cannot create endpoint with camel printer component
I have tested several lpr URI schemes but I cannot create an endpoint. I get the following message:

Reason: javax.print.PrintException: No printer found with name: \\10.250.10.149:9001\hp9000. Please verify that the host and printer are registered and reachable from this machine.

The URI is: lpr://10.250.10.149/hp9000. The printer exists, and telnet to this IP and port works fine.

-- View this message in context: http://camel.465427.n5.nabble.com/Cannot-create-endpoint-with-camel-printer-component-tp5742654.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Camel 2.12.1 org.xml.sax.SAXParseException
Hi,

in Camel 2.12.1, when splitting a message, this kind of XML tag doesn't work, while in Camel 2.11.0 it works fine:

<tag attribute="attribute"/>

org.xml.sax.SAXParseException; XML document structures must start and end within the same entity.

If I write it as:

<tag attribute="attribute"></tag>

it works fine. Is this the expected behaviour? Thanks in advance!

-- View this message in context: http://camel.465427.n5.nabble.com/Camel-2-12-1-org-xml-sax-SAXParseException-tp5742659.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Re: Forcing file write to complete without stream caching
If you are talking about how to avoid picking up new files too early in a Camel from route, then take a look at the various read lock documentation on the file component.

On Tue, Nov 5, 2013 at 2:39 PM, Ben Hood 0x6e6...@gmail.com wrote:

Hi,

In my first attempt to use Camel I've run into an intra-route timing issue that I've only solved with a hack, so I was wondering whether there are any best practices for dealing with timing issues across multiple processing steps in a batch file pipeline.

Basically I am trying to avoid doing an HTTP POST with an empty payload: the route performing the HTTP POST is triggered before the file that it is wired to upload has been written. If I turn stream caching on, this problem goes away. However, since some files can be quite big, I'd prefer not to have to do stream caching. So to work around the issue, I've written a bean that just does a Thread.sleep() in order to wait for the upload file to actually get some data in it before firing off the HTTP POST.

I've got a two-step pipeline that:

1. Transcodes a batch input file into an intermediate format (using msgpack serialization);
2. Performs an HTTP POST of the intermediate format to a remote server.

I'd like to keep the intermediate format around on disk for debugging and manual replay tasks. My Camel context has two routes:

<route id="transcode-to-msgpack">
  <from uri="file:/tmp/d"/>
  <log message="Transcoding ${file:name} to msgpack"/>
  <to uri="bean:transcoder"/>
  <to uri="file:/tmp/b?fileName=${file:name.noext}.msgpack"/>
</route>

<route id="post-msgpack-payload">
  <from uri="file:/tmp/b"/>
  <from uri="file:/tmp/e"/>
  <log message="POSTing ${file:name} to the rating API"/>
  <setProperty propertyName="url.template">
    <constant>http://localhost:/calls/:source/:sequence</constant>
  </setProperty>
  <process ref="httpDataPump"/>
</route>

I have two custom beans doing the work:

1. transcoder - takes an InputStream and returns an InputStream that wraps and transcodes the InputStream from the file;
2. httpDataPump - contains an HTTP client that uploads the FileInputStream that the InMessage of the Exchange refers to.

Doing a Thread.sleep() seems like a real hack to me, so I was wondering if there is a more idiomatic way to solve the issue. I've looked into the preMoveNamePrefix options, but they appear to apply only to input files. Ideally I'm looking for something that can move the output file after it has been written. Any pointers are appreciated.

Cheers,

Ben

--
Claus Ibsen - Red Hat, Inc.
Email: cib...@redhat.com
Twitter: davsclaus
Blog: http://davsclaus.com
Author of Camel in Action: http://www.manning.com/ibsen
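For reference, the read-lock behavior Claus mentions is documented as a per-endpoint URI option on the file consumer. A sketch for the second route's input directory (the interval value here is arbitrary; readLock and readLockCheckInterval are the documented option names):

```xml
<!-- readLock=changed: watch the file's length/modification timestamp and
     only pick the file up once it has stopped changing -->
<from uri="file:/tmp/b?readLock=changed&amp;readLockCheckInterval=1000"/>
```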
Re: Basic Apache-Camel LoadBalancer-Failover Example
In the previous post I had 2 questions: 1. how to do this in java dsl 2. why are the mina servers sending replies. I will attack problem 1 eventually, but I just want to state that the solution to problem 2 is here: http://camel.465427.n5.nabble.com/Load-balancing-using-Mina-example-with-Java-DSL-td5742566.html#a5742585 Kudos to Mr. Claus for the answer and suggestions. -- View this message in context: http://camel.465427.n5.nabble.com/Basic-Apache-Camel-LoadBalancer-Failover-Example-tp5742551p5742662.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: count of processed messages when using aggregation
The Aggregator sets a CamelAggregatedSize header; maybe try Simple [1] to set a header with the sum.

[1] http://camel.apache.org/simple.html

On 05 Nov 2013, at 15:48, Olaf omgolafg...@gmail.com wrote:

Hello,

thanks! CamelSplitSize is useful. I'd then add:

.to("remote")
.choice()
    .when(property("CamelSplitComplete").isEqualTo(true))
        .log("split ${property.CamelSplitSize} records")
    .otherwise()
        .log("file not completed yet");

But what if some messages fail, or are filtered out and never sent to the remote system? How do I sum those?

-- View this message in context: http://camel.465427.n5.nabble.com/count-of-processed-messages-when-using-aggregation-tp5742649p5742655.html
Sent from the Camel - Users mailing list archive at Nabble.com.
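Since each completed aggregate only knows its own size, summing across batches needs state that outlives a single exchange, for example a singleton bean the route invokes after the send, passing it the CamelAggregatedSize header. A minimal sketch of just that bean (the class and method names are made up; the Camel wiring is omitted):

```java
import java.util.concurrent.atomic.AtomicLong;

// Keeps a running total across aggregate completions. A route could call
// addBatch(...) with the CamelAggregatedSize header value of each batch
// after it has been sent to the remote system.
class RecordCounter {
    private final AtomicLong total = new AtomicLong();

    // returns the new running total after adding this batch
    public long addBatch(long aggregatedSize) {
        return total.addAndGet(aggregatedSize);
    }

    public long total() {
        return total.get();
    }
}
```

Because only successfully sent batches would reach the bean, failed or filtered messages are naturally excluded from the total.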
Join after a split
Hi

I have a route which takes a database query result set, splits it by row, then processes each row using another route with a seda endpoint. Something like:

<route id="synch-rm" autoStartup="true">
  <description>Get updates from dhis db and post to registry</description>
  <to uri="sql:{{hmisdb.selectOus}}?dataSource=#dhisdb"/>
  <!-- process rows concurrently -->
  <split>
    <simple>${in.body}</simple>
    <to uri="seda:newFacility"/>
  </split>
  <log message="Done"/>
</route>

Is there a way I can make the main thread in this route wait until all the seda:newFacility routes are complete before reaching the log message?

Bob
Re: Camel 2.12.1 org.xml.sax.SAXParseException
Can you show some of the route code you use for the splitting? On Tue, Nov 5, 2013 at 4:05 PM, Cecilio Alvarez cecilio.alva...@hotmail.com wrote: Hi, in Camel 2.12.1, when splitting a message, this kind of XML tag doesn't work, while in Camel 2.11.0 it works fine: <tag attribute="attribute"/> org.xml.sax.SAXParseException: XML document structures must start and end within the same entity. If I write <tag attribute="attribute"></tag> instead, it works fine. Is this supposed to be the expected behaviour? Thanks in advance! -- View this message in context: http://camel.465427.n5.nabble.com/Camel-2-12-1-org-xml-sax-SAXParseException-tp5742659.html Sent from the Camel - Users mailing list archive at Nabble.com. -- Claus Ibsen - Red Hat, Inc. Email: cib...@redhat.com Twitter: davsclaus Blog: http://davsclaus.com Author of Camel in Action: http://www.manning.com/ibsen
Re: Join after a split
Hi See this EIP http://camel.apache.org/composed-message-processor.html about _only using a splitter_ On Tue, Nov 5, 2013 at 4:54 PM, Bob Jolliffe bobjolli...@gmail.com wrote: Hi I have a route which takes a database query result set, splits it by row, then proceses each row using another route with seda endpoint. Something like: route id=synch-rm autoStartup=true descriptionGet updates from dhis db and post to registry/description to uri=sql:{{hmisdb.selectOus}}?dataSource=#dhisdb/ !-- process rows concurrently -- split simple${in.body}/simple to uri=seda:newFacility/ /split log message=Done/ /route Is there I can cause the main thread in this route to wait until all the seda:newFacility routes are complete before getting to the log message? Bob -- Claus Ibsen - Red Hat, Inc. Email: cib...@redhat.com Twitter: davsclaus Blog: http://davsclaus.com Author of Camel in Action: http://www.manning.com/ibsen
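The splitter-only approach Claus points at works because the split EIP itself does not continue past the end of the split block until every part has completed, as long as the per-row work is invoked synchronously. A hedged sketch of the route above rewritten that way — direct: replaces seda: so the splitter can wait on each row, parallelProcessing keeps the rows concurrent, and it assumes the logic currently behind seda:newFacility is moved to a direct:newFacility route:

```xml
<route id="synch-rm" autoStartup="true">
  <description>Get updates from dhis db and post to registry</description>
  <to uri="sql:{{hmisdb.selectOus}}?dataSource=#dhisdb"/>
  <!-- rows are processed concurrently, but split blocks until all are done -->
  <split parallelProcessing="true">
    <simple>${in.body}</simple>
    <!-- direct: is synchronous; seda: would hand off to another thread
         and return immediately, so the log below would fire too early -->
    <to uri="direct:newFacility"/>
  </split>
  <!-- only reached after every row has been processed -->
  <log message="Done"/>
</route>
```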
Re: Forcing file write to complete without stream caching
Hey Claus, Having to acquire a lock on the file sounds like a good way to implement the don't-start-reading-an-empty-file semantics I'm looking for. Having said that, the documentation on read locks is somewhat misleading. It notes a boolean URI parameter called consumer.exclusiveReadLock. The configuration processor doesn't seem to consider this an acceptable option - maybe I'm doing something wrong. So turning to the source, the FileProcessStrategyFactory appears to accept a flag called readLock, which can be none, markerFile, fileLock, rename or changed. So I went for fileLock. However, it seems that this strategy is not scoped to an individual endpoint; rather, it appears to be set globally for the entire Camel context (I gained this impression by debugging the 2.12.1 release). So it seems that whichever file endpoint is processed first sets the strategy for the entire context. Or am I missing the point? Cheers, Ben On Nov 5, 2013, at 15:19, Claus Ibsen claus.ib...@gmail.com wrote: If you are talking about how to not pick up new files in a Camel from route, then take a look at the various read lock documentation on the file component. On Tue, Nov 5, 2013 at 2:39 PM, Ben Hood 0x6e6...@gmail.com wrote: Hi, In my first attempt to use Camel I’ve run into an intra-route timing issue that I’ve only solved with a hack, so I was wondering whether there are any best practices for dealing with timing issues across multiple processing steps in a batch file pipeline. Basically I am trying to avoid doing an HTTP POST with an empty payload, since the route performing the HTTP POST is triggered before the file that it is wired to upload has been written. If I turn stream caching on, this problem goes away. However, since some files can be quite big, I’d prefer not to have to do stream caching. 
So to solve the issue, I’ve written a workaround bean that just does a Thread.sleep() in order to wait for the upload file to actually get some data in it before firing off the HTTP POST. I’ve got a two-step pipeline that: 1. Transcodes a batch input file into an intermediate format (using msgpack serialization); 2. Performs an HTTP POST of the intermediate format to a remote server. I’d like to keep the intermediate format around on disk for debugging and manual replay tasks. My Camel context has two routes:

<route id="transcode-to-msgpack">
  <from uri="file:/tmp/d"/>
  <log message="Transcoding ${file:name} to msgpack"/>
  <to uri="bean:transcoder"/>
  <to uri="file:/tmp/b?fileName=${file:name.noext}.msgpack"/>
</route>
<route id="post-msgpack-payload">
  <from uri="file:/tmp/b"/>
  <from uri="file:/tmp/e"/>
  <log message="POSTing ${file:name} to the rating API"/>
  <setProperty propertyName="url.template">
    <constant>http://localhost:/calls/:source/:sequence</constant>
  </setProperty>
  <process ref="httpDataPump"/>
</route>

I have two custom beans doing the work: 1. transcoder - this takes an InputStream, and returns an InputStream that wraps and transcodes the InputStream from the file; 2. httpDataPump - this contains an HTTP client that uploads the FileInputStream that the InMessage of the Exchange refers to. Doing a Thread.sleep() seems like a real hack to me, so I was wondering if there is a more idiomatic way to solve the issue. I’ve looked into the preMoveNamePrefix options, but they appear to apply only to input files. Ideally I’m looking for something that can move the output file after it has been written. Any pointers are appreciated. Cheers, Ben -- Claus Ibsen - Red Hat, Inc. Email: cib...@redhat.com Twitter: davsclaus Blog: http://davsclaus.com Author of Camel in Action: http://www.manning.com/ibsen
Re: Forcing file write to complete without stream caching
That's the old file component, have a look at file2. http://camel.apache.org/file2.html On 05 Nov 2013, at 20:36, Ben Hood 0x6e6...@gmail.com wrote: Hey Claus, Having to acquire a lock on the file sounds like a good way to implement the don't start attempting to read an empty file semantics I'm looking for. Having said that, the documentation on read locks is somewhat misleading. It notes a boolean URI parameter called consumer.exclusiveReadLock. The configuration processor doesn't seem to consider this to be an acceptable option - maybe I'm doing something wrong. So turning to the source, the FileProcessStrategyFactory appears to accept a flag called readLock, which can be either none, markerFile, fileLock, rename or changed. So I went for fileLock. However, it seems that this strategy is not scoped on an individual endpoint, rather it appears to be set globally for the entire camel context (I gained this impression by debugging the 2.12.1 release). So it seems that which ever file endpoint is processed first sets the strategy for the entire context. Or am I missing the point? Cheers, Ben On Nov 5, 2013, at 15:19, Claus Ibsen claus.ib...@gmail.com wrote: If you are talking about how to not pickup new files in a Camel from route, then take a look at the various read lock documentation on the file component. On Tue, Nov 5, 2013 at 2:39 PM, Ben Hood 0x6e6...@gmail.com wrote: Hi, In my first attempt to use Camel I’ve run into a intra-route timing issue that I’ve only solved with a hack, so I was wondering whether there are any best practices of dealing with timing issues when dealing with multiple processing steps in a batch file pipeline. Basically I am trying to avoid doing an HTTP POST with an empty payload, since the route performing the HTTP POST is triggered before the file that it is wired to upload has been written. If I turn stream caching on, this problem goes away. However, since some files can be quite big, I’d prefer not to have do stream caching. 
So to solve the issue, I’ve written a workaround bean that just does a Thread.sleep() in order to wait for the upload file to actually get some data in it before firing off the HTTP POST. I’ve got a two step pipeline that: 1. Transcodes a batch input file into an intermediate format (using msgpack serialization); 2. Performs an HTTP POST of the intermediate format to a remote server; I’d like to keep the intermediate format around on disk for debugging and manual replay tasks. My camel context has two routes: route id=“transcode-to-msgpack from uri=file:/tmp/d/ log message=Transcoding ${file:name} to msgpack / to uri=bean:transcoder/ to uri=file:/tmp/b?fileName=${file:name.noext}.msgpack/ /route route id=“post-msgpack-payload from uri=file:/tmp/b/ from uri=file:/tmp/e/ log message=POSTing ${file:name} to the rating API / setProperty propertyName=url.template constanthttp://localhost:/calls/:source/:sequence/constant /setProperty process ref=“httpDataPump/ /route I have two custom beans doing the work: 1. transcoder - This takes an InputStream, and returns an InputStream that wraps and transcodes the InputStream from the file; 2. httpDataPump - This contains an HTTP client that uploads the FileInputStream the the InMessage from the Exchange refers. Doing a Thread.sleep() seems like a real hack to me, so I was wondering if there is a more idiomatic way to solve the issue. I’ve looked into the preMoveNamePrefix options, but they appear to apply only to input files. Ideally I’m looking for something that can move the output file after it has been written. Any pointers are appreciated. Cheers, Ben -- Claus Ibsen - Red Hat, Inc. Email: cib...@redhat.com Twitter: davsclaus Blog: http://davsclaus.com Author of Camel in Action: http://www.manning.com/ibsen
Re: Camel 2.12.1, Spring Framework 3.2.3, Jboss 6.0.0 Deployment Problem
Check that you don't have a library/version mismatch. It looks like you are also using Camel 2.7.0... CamelNamespaceHandler.java:169)[:2.7.0] Best, Christian - Software Integration Specialist Apache Member V.P. Apache Camel | Apache Camel PMC Member | Apache Camel committer Apache Incubator PMC Member https://www.linkedin.com/pub/christian-mueller/11/551/642 On Mon, Nov 4, 2013 at 10:46 PM, rtmacphail rtmacph...@gmail.com wrote: I have two web applications which communicate with each other using JMS messaging. The JMS queues are managed by a single ActiveMQ broker and the applications use Apache Camel to send and receive messages on the queues. One application uses Camel 2.7.0 and Spring Framework 3.0.5. This app deploys successfully on JBoss 6. The second application uses Camel 2.12.1 and Spring Framework 3.2.3. It deploys successfully on Jetty, but will not run on JBoss 6. I get the following exception: Failed to parse JAXB element; nested exception is javax.xml.bind.UnmarshalException: unexpected element (uri:http://camel.apache.org/schema/spring, local:camelContext). 
Expected elements are {}aggregate,{}aop,{}avro,{}base64,{}batchResequencerConfig,{}bean,{}beanPostProcessor,{}beanio,{}bindy,{}camelContext,{}castor,{}choice,{}constant,{}consumerTemplate, {}contextScan,{}convertBodyTo,{}crypto,{}csv,{}customDataFormat,{}customLoadBalancer,{}dataFormats,{}delay,{}description,{}doCatch,{}doFinally,{}doTry,{}dynamicRouter,{}el, {}endpoint,{}enrich,{}errorHandler,{}export,{}expression,{}expressionDefinition,{}failover,{}filter,{}flatpack,{}from,{}groovy,{}gzip,{}header,{}hl7,{}idempotentConsumer, {}inOnly,{}inOut,{}intercept,{}interceptFrom,{}interceptToEndpoint,{}javaScript,{}jaxb,{}jibx,{}jmxAgent,{}json,{}jxpath,{ http://camel.apache.org/schema/spring}keyStoreParameters ,{}language,{}loadBalance,{}log,{}loop,{}marshal,{}method,{}multicast,{}mvel,{}ognl,{}onCompletion,{}onException,{}optimisticLockRetryPolicy,{}otherwise,{}packageScan,{}p gp,{}php,{}pipeline,{}policy,{}pollEnrich,{}process,{}properties,{}property,{}propertyPlaceholder,{}protobuf,{}proxy,{}python,{}random,{}recipientList,{}redeliveryPolicy, {}redeliveryPolicyProfile,{}ref,{}removeHeader,{}removeHeaders,{}removeProperty,{}resequence,{}rollback,{}roundRobin,{}route,{}routeBuilder,{}routeContext,{}routeContextRef,{}r outes,{}routingSlip,{}rss,{}ruby,{}sample,{ http://camel.apache.org/schema/spring}secureRandomParameters ,{}secureXML,{}serialization,{}setBody,{}setExchangePattern,{}setFaultBody, {}setHeader,{}setOutHeader,{}setProperty,{}simple,{}soapjaxb,{}sort,{}spel,{}split,{}sql,{ http://camel.apache.org/schema/spring}sslContextParameters ,{}sticky,{}stop,{}streamCac hing,{}streamResequencerConfig,{}string,{}syslog,{}template,{}threadPool,{}threadPoolProfile,{}threads,{}throttle,{}throwException,{}tidyMarkup,{}to,{}tokenize,{}topic,{}tr ansacted,{}transform,{}unmarshal,{}validate,{}vtdxml,{}weighted,{}when,{}wireTap,{}xmlBeans,{}xmljson,{}xmlrpc,{}xpath,{}xquery,{}xstream,{}zip,{}zipFile at 
org.apache.camel.spring.handler.CamelNamespaceHandler.parseUsingJaxb(CamelNamespaceHandler.java:169) [:2.7.0] The following is my active mq broker and camel context configurations ?xml version=1.0 encoding=UTF-8? beans xmlns=http://www.springframework.org/schema/beans; xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance; xmlns:camel=http://camel.apache.org/schema/spring; xsi:schemaLocation= http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd; bean id=jbossResolver class=org.apachextras.camel.jboss.JBossPackageScanClassResolver/ camel:camelContext id=rmsCamelContext camel:packagepeigov.rms.jms/camel:package /camel:camelContext /beans ?xml version=1.0 encoding=UTF-8? beans xmlns=http://www.springframework.org/schema/beans; xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance; xsi:schemaLocation= http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd; bean id=jmsConnectionFactory class=org.apache.activemq.ActiveMQConnectionFactory property name=brokerURL value=tcp://localhost:61616 / /bean bean id=pooledConnectionFactory class=org.apache.activemq.pool.PooledConnectionFactory property name=maxConnections value=8 / property name=maximumActive value=500 / property name=connectionFactory ref=jmsConnectionFactory / /bean bean id=jmsConfig
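Christian's diagnosis above points at mixed Camel versions on the classpath: the failing stack frame is tagged [:2.7.0] even though this application declares 2.12.1, so an old camel-spring JAR is parsing the new schema. A hedged sketch of pinning one version in the WAR's Maven build (assuming Maven; adapt the idea to whatever build tool is in use):

```xml
<dependencyManagement>
  <dependencies>
    <!-- force all camel-* artifacts, including transitive ones, to one version -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-core</artifactId>
      <version>2.12.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-spring</artifactId>
      <version>2.12.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

After rebuilding, it is worth checking the deployed WEB-INF/lib for any leftover camel-*-2.7.0.jar, since JBoss can also expose server-side libraries on the application classpath.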
Re: jibx dataformat does not support setting bindingname
I'm afraid this is not supported yet. But, as you already noted, it looks easy and could be added. Would you consider providing a patch [1]? [1] http://camel.apache.org/contributing.html Best, Christian - Software Integration Specialist Apache Member V.P. Apache Camel | Apache Camel PMC Member | Apache Camel committer Apache Incubator PMC Member https://www.linkedin.com/pub/christian-mueller/11/551/642 On Tue, Nov 5, 2013 at 12:24 AM, netminkey pedro_mucha...@yahoo.com wrote: The JiBX dataformat does not seem to allow setting the binding name on the marshaller/unmarshaller. Normally, you'd have the option of calling BindingDirectory.getFactory(bindingName, class), but this doesn't seem to be available in marshal().jibx(). Is there another way to do this? -- View this message in context: http://camel.465427.n5.nabble.com/jibx-dataformat-does-not-support-setting-bindingname-tp5742615.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: camel vs spring integration
Have a look at the following links: http://forum.spring.io/forum/spring-projects/integration/118876-spring-integration-2-1-request-reply-benchmark-tests-showed-very-poor-performance http://camel.465427.n5.nabble.com/fyi-SI-td5716049.html Best, Christian - Software Integration Specialist Apache Member V.P. Apache Camel | Apache Camel PMC Member | Apache Camel committer Apache Incubator PMC Member https://www.linkedin.com/pub/christian-mueller/11/551/642 On Sat, Nov 2, 2013 at 3:24 PM, Robert James Liguori glies...@yahoo.comwrote: Hello folks, On the Code Ranch (formerly the Java Ranch), a question was posted relative to the Apache Camel Components Poster promotion, of which I cannot answer: Here is the question: Camel seems to offer a better developer experience (via the Fluent API) than Spring Integration; and certainly offers more connector options. How does it compare in throughput? Specifically, assuming String JMS messages of 1-10k in/out, Spring Integration + JiBX marshalling/unmarshalling can get me throughput of 1000's of messages per second. Can Camel compete at that level? Years back, I remember there being some issues with Camel/ServiceMix throughput being very slow (10's to 100's of messages per second). If anyone can answer this, can you please drop by the Other Open Source Projects of the Code Ranch forum and give a helpful response? Btw, here is the direct URL: http://www.coderanch.com/t/622920/open-source/camel-spring-integration#2848108 Thank you so much, Robert Liguori
Re: Is it better to use PooledConnectionFactory in Parallel Processing
In general, you should use the PooledConnectionFactory. The connections are created upfront, which saves time, and they are reused, which is more resource friendly. Best, Christian - Software Integration Specialist Apache Member V.P. Apache Camel | Apache Camel PMC Member | Apache Camel committer Apache Incubator PMC Member https://www.linkedin.com/pub/christian-mueller/11/551/642 On Tue, Oct 29, 2013 at 12:10 PM, Dayakar daya.kond...@gmail.com wrote: Hi, Previously we were using ActiveMQConnectionFactory alone, and we observed that while starting the JBoss server, ActiveMQ was creating and closing multiple threads (ActiveMQ Task 1, ActiveMQ Task 2, ...); we assumed it was creating and closing multiple connections/sessions. As a resolution, we want to use PooledConnectionFactory. 1) We are using Camel routes to prepare and send messages to a destination; we invoke 7 routes, each configured with parallel processing (min 5 and max 25 threads), daily we process around 3 lakh messages (= 300,000), and all routes are TRANSACTED. With parallel processing, is it better to use PooledConnectionFactory? Does PooledConnectionFactory create one session per message, or is a session object shared by multiple messages? -- View this message in context: http://camel.465427.n5.nabble.com/Is-it-better-to-use-PooledConnectionFactory-in-Parallel-Processing-tp5742350.html Sent from the Camel - Users mailing list archive at Nabble.com.
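A hedged Spring sketch of wiring the pool Christian recommends — the bean classes match the ones quoted in the JBoss thread earlier in this digest; the init-method/destroy-method pair starts and stops the pool with the Spring context, and the broker URL and pool sizes are placeholders to tune:

```xml
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="tcp://localhost:61616"/>
</bean>

<!-- pool of JMS connections, started and stopped with the Spring context -->
<bean id="pooledConnectionFactory"
      class="org.apache.activemq.pool.PooledConnectionFactory"
      init-method="start" destroy-method="stop">
  <property name="maxConnections" value="8"/>
  <property name="maximumActive" value="500"/>
  <property name="connectionFactory" ref="jmsConnectionFactory"/>
</bean>

<!-- point the Camel ActiveMQ component at the pool -->
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
  <property name="connectionFactory" ref="pooledConnectionFactory"/>
  <property name="transacted" value="true"/>
</bean>
```

With this in place, the pooled sessions are shared across the parallel consumers rather than one connection being opened per message.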
Re: inProgressRepository Not clearing for items in idempotentRepository
I worked around this by removing the idempotent configuration and instead writing a GenericFileFilter which basically copies the idempotent repo's behavior. This is working great. -- View this message in context: http://camel.465427.n5.nabble.com/inProgressRepository-Not-clearing-for-items-in-idempotentRepository-tp5742613p5742676.html Sent from the Camel - Users mailing list archive at Nabble.com.
How to keep route running on camel with JavaDSL?
I have coded a load balancer that generates a report every 10 seconds and sends it to a MINA server on localhost:9991, or to a MINA server on localhost:9992 should the first one fail. Once the MINA servers receive the report, they change it and send it back to the load balancer, which then prints the body of the report.

public class MyApp_A {
    public static void main(String... args) throws Exception {
        // create CamelContext
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("timer://org.apache.camel.example.loadbalancer?period=10s")
                    .beanRef("service.Generator", "createReport")
                    .to("direct:loadbalance");

                from("direct:loadbalance")
                    .loadBalance().failover() // will send to A first, and if it fails then to B
                        .to("mina:tcp://localhost:9991?sync=true")
                        .to("mina:tcp://localhost:9992?sync=true")
                    .log("${body}")
                    .end();
            }
        });
        context.start();
    }
}

However, once I execute the load balancer it finishes immediately. I tried fixing this problem by adding an infinite loop after the context.start() call, but this is a terrible solution because the program simply gets stuck in the loop and stops generating reports and sending them. How do I fix this? How do I keep my load balancer running while being able to generate requests and print the reports that it receives? -- View this message in context: http://camel.465427.n5.nabble.com/How-to-keep-route-running-on-camel-with-JavaDSL-tp5742677.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: How to keep route running on camel with JavaDSL?
Hi Yeah check this page http://camel.apache.org/running-camel-standalone.html And the cookbook example it refers to. On Tue, Nov 5, 2013 at 11:19 PM, pmp.martins pmp.mart...@campus.fct.unl.pt wrote: I have code a loadbalancer that generates a report every 10 seconds and sends it to a MINA server on localhost:9991 or to a MINA server on localhost:9992 should the first one fail. Once the MINA servers receive the report, they change it and send it back to the loadbalancer, which then print the body of the report. public class MyApp_A { public static void main(String... args) throws Exception { // create CamelContext CamelContext context = new DefaultCamelContext(); context.addRoutes( new RouteBuilder(){ @Override public void configure() throws Exception { from(timer://org.apache.camel.example.loadbalancer?period=10s) .beanRef(service.Generator, createReport) .to(direct:loadbalance); from(direct:loadbalance) .loadBalance().failover() // will send to A first, and if fails then send to B afterwards .to(mina:tcp://localhost:9991?sync=true) .to(mina:tcp://localhost:9992?sync=true) .log(${body}) .end(); } } ); context.start(); } } However, once I execute the loadbalancer it finishes immediately. I tried fixing this problem by adding an infinite loop after the context.start() call, however this is a terrible solution because the program will simply get stuck on the loop and will stop generating reports and sending them. How do I fix this? How do I keep my loadbalancer running while being able to generate requests and print the reports that it receives? -- View this message in context: http://camel.465427.n5.nabble.com/How-to-keep-route-running-on-camel-with-JavaDSL-tp5742677.html Sent from the Camel - Users mailing list archive at Nabble.com. -- Claus Ibsen - Red Hat, Inc. Email: cib...@redhat.com Twitter: davsclaus Blog: http://davsclaus.com Author of Camel in Action: http://www.manning.com/ibsen
Re: Forcing file write to complete without stream caching
Hey guys, Thanks a lot for your help. So by reading the file2 documentation instead of the file documentation, I was able to solve my problem by adding doneFileName=${file:name.noext}.done to the uri for the file being written and also to the uri of the route that was attempting the subsequent read. This allows me to get rid of the hack I put into the processor component to wait for the file to receive data. For anybody reading this thread, the relevant documentation on this feature is under the headings Using done files and Writing done files. Thanks for all of your help, Cheers, Ben On Tue, Nov 5, 2013 at 7:03 PM, Taariq Levack taar...@gmail.com wrote: That's the old file component, have a look at file2. http://camel.apache.org/file2.html On 05 Nov 2013, at 20:36, Ben Hood 0x6e6...@gmail.com wrote: Hey Claus, Having to acquire a lock on the file sounds like a good way to implement the don't start attempting to read an empty file semantics I'm looking for. Having said that, the documentation on read locks is somewhat misleading. It notes a boolean URI parameter called consumer.exclusiveReadLock. The configuration processor doesn't seem to consider this to be an acceptable option - maybe I'm doing something wrong. So turning to the source, the FileProcessStrategyFactory appears to accept a flag called readLock, which can be either none, markerFile, fileLock, rename or changed. So I went for fileLock. However, it seems that this strategy is not scoped on an individual endpoint, rather it appears to be set globally for the entire camel context (I gained this impression by debugging the 2.12.1 release). So it seems that which ever file endpoint is processed first sets the strategy for the entire context. Or am I missing the point? Cheers, Ben On Nov 5, 2013, at 15:19, Claus Ibsen claus.ib...@gmail.com wrote: If you are talking about how to not pickup new files in a Camel from route, then take a look at the various read lock documentation on the file component. 
On Tue, Nov 5, 2013 at 2:39 PM, Ben Hood 0x6e6...@gmail.com wrote: Hi, In my first attempt to use Camel I’ve run into a intra-route timing issue that I’ve only solved with a hack, so I was wondering whether there are any best practices of dealing with timing issues when dealing with multiple processing steps in a batch file pipeline. Basically I am trying to avoid doing an HTTP POST with an empty payload, since the route performing the HTTP POST is triggered before the file that it is wired to upload has been written. If I turn stream caching on, this problem goes away. However, since some files can be quite big, I’d prefer not to have do stream caching. So to solve the issue, I’ve written a workaround bean that just does a Thread.sleep() in order to wait for the upload file to actually get some data in it before firing off the HTTP POST. I’ve got a two step pipeline that: 1. Transcodes a batch input file into an intermediate format (using msgpack serialization); 2. Performs an HTTP POST of the intermediate format to a remote server; I’d like to keep the intermediate format around on disk for debugging and manual replay tasks. My camel context has two routes: route id=“transcode-to-msgpack from uri=file:/tmp/d/ log message=Transcoding ${file:name} to msgpack / to uri=bean:transcoder/ to uri=file:/tmp/b?fileName=${file:name.noext}.msgpack/ /route route id=“post-msgpack-payload from uri=file:/tmp/b/ from uri=file:/tmp/e/ log message=POSTing ${file:name} to the rating API / setProperty propertyName=url.template constanthttp://localhost:/calls/:source/:sequence/constant /setProperty process ref=“httpDataPump/ /route I have two custom beans doing the work: 1. transcoder - This takes an InputStream, and returns an InputStream that wraps and transcodes the InputStream from the file; 2. httpDataPump - This contains an HTTP client that uploads the FileInputStream the the InMessage from the Exchange refers. 
Doing a Thread.sleep() seems like a real hack to me, so I was wondering if there is a more idiomatic way to solve the issue. I’ve looked into the preMoveNamePrefix options, but they appear to apply only to input files. Ideally I’m looking for something that can move the output file after it has been written. Any pointers are appreciated. Cheers, Ben -- Claus Ibsen - Red Hat, Inc. Email: cib...@redhat.com Twitter: davsclaus Blog: http://davsclaus.com Author of Camel in Action: http://www.manning.com/ibsen
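Ben's resolution earlier in the thread (adding doneFileName to both URIs) can be sketched as follows; a hedged rework of his two routes using the file2 done-file options — paths and bean ids are taken from his post, and the exact option behaviour is as described under "Using done files" / "Writing done files" in the file2 docs:

```xml
<route id="transcode-to-msgpack">
  <from uri="file:/tmp/d"/>
  <to uri="bean:transcoder"/>
  <!-- the .done marker is written only after the .msgpack file is complete -->
  <to uri="file:/tmp/b?fileName=${file:name.noext}.msgpack&amp;doneFileName=${file:name.noext}.done"/>
</route>
<route id="post-msgpack-payload">
  <!-- the consumer ignores files until their matching .done marker exists -->
  <from uri="file:/tmp/b?doneFileName=${file:name.noext}.done"/>
  <process ref="httpDataPump"/>
</route>
```

This removes the need for the Thread.sleep() workaround, since the reading route can never observe a half-written file.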
Camel-rabbitmq - how to handle endpoint not available?
I'm trying to use Camel embedded into an application (Spring configured) to push files from a local directory to an instance of RabbitMQ. It's been working well so far, except in the case where RabbitMQ is not available for some reason (network outage, maintenance, etc.), and our application is started. In this case, Camel is unable to start because the endpoint is not available (FailedToCreateProducerException resulting from a connection refused). Is it possible to recover from such an error? The ideal situation would be to allow a configuration where redelivery attempts could be made in the hope that RabbitMQ will become available again. Our application that is using Camel is deployed on several hundred workstations and it would be necessary to configure Camel in such a way that recovery (reconnect to Rabbit) would be automatic. Any ideas? There doesn't appear to be a way to configure Camel using the available Exception handling strategies to operate in this manner, although my guess is that I've missed something. Thanks! - Matt
Re: How to Deploy new Camel Context with Hawtio
Here is what I did. I took the jar file from here: http://repo1.maven.org/maven2/io/hawt/hawtio-watcher-spring-context/1.2-M10/hawtio-watcher-spring-context-1.2-M10.jar then I added it to the WEB-INF/lib folder of the hawtio sample war file. I deployed and created new routes in wiki/camel-spring.xml. Still nothing is changing in the deployed camel context. I still see only the default two routes. Then I added a route to wiki/camel.xml. Still I don't see them appearing. Please let me know what I am missing. -- View this message in context: http://camel.465427.n5.nabble.com/How-to-Deploy-new-Camel-Context-with-Hawtio-tp5742571p5742682.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: How to Deploy new Camel Context with Hawtio
Also, to make sure it is not a compatibility issue, I took https://oss.sonatype.org/content/repositories/public/io/hawt/sample/1.2-M27/sample-1.2-M27.war, and inserted the following JAR into its WEB-INF/lib folder: https://oss.sonatype.org/content/repositories/public/io/hawt/hawtio-watcher-spring-context/1.2-M27/hawtio-watcher-spring-context-1.2-M27.jar I also created an additional camel context /sping/mypersonal-camelcontext.xml. I declared a camel context in this file with a different id. Still no change in the deployed context. -- View this message in context: http://camel.465427.n5.nabble.com/How-to-Deploy-new-Camel-Context-with-Hawtio-tp5742571p5742685.html Sent from the Camel - Users mailing list archive at Nabble.com.
Re: How to keep route running on camel with JavaDSL?
When you start the Camel route in main, you need to add a sleep (or similar) to block the main thread from exiting. BTW, you can use other tools that Camel provides to run the route standalone, as Claus just showed you. -- Willem Jiang Red Hat, Inc. Web: http://www.redhat.com Blog: http://willemjiang.blogspot.com (http://willemjiang.blogspot.com/) (English) http://jnn.iteye.com (http://jnn.javaeye.com/) (Chinese) Twitter: willemjiang Weibo: 姜宁willem On Wednesday, November 6, 2013 at 6:19 AM, pmp.martins wrote: I have code a loadbalancer that generates a report every 10 seconds and sends it to a MINA server on localhost:9991 or to a MINA server on localhost:9992 should the first one fail. Once the MINA servers receive the report, they change it and send it back to the loadbalancer, which then print the body of the report. public class MyApp_A { public static void main(String... args) throws Exception { // create CamelContext CamelContext context = new DefaultCamelContext(); context.addRoutes( new RouteBuilder(){ @Override public void configure() throws Exception { from(timer://org.apache.camel.example.loadbalancer?period=10s) .beanRef(service.Generator, createReport) .to(direct:loadbalance); from(direct:loadbalance) .loadBalance().failover() // will send to A first, and if fails then send to B afterwards .to(mina:tcp://localhost:9991?sync=true) .to(mina:tcp://localhost:9992?sync=true) .log(${body}) .end(); } } ); context.start(); } } However, once I execute the loadbalancer it finishes immediately. I tried fixing this problem by adding an infinite loop after the context.start() call, however this is a terrible solution because the program will simply get stuck on the loop and will stop generating reports and sending them. How do I fix this? How do I keep my loadbalancer running while being able to generate requests and print the reports that it receives? 
-- View this message in context: http://camel.465427.n5.nabble.com/How-to-keep-route-running-on-camel-with-JavaDSL-tp5742677.html Sent from the Camel - Users mailing list archive at Nabble.com (http://Nabble.com).
Transacted routes without using spring route builder?
Greetings, I have a route that needs to call a database, read some records, format them and put them into ActiveMQ. Naturally the best way to do this would be to wrap the whole thing in a transacted route. However, I do not want to use Spring XML as a route builder interface at all; I don't think it's as expressive or flexible as the fluent builders. The application will be running in a WAR in JBoss EAP 6.1.1, using a ServletContextListener to bootstrap Camel. Although I might have to tolerate some Spring intrusion, I am generally not a fan of IoC Spring taking over whole applications. So what would be the best way to accomplish this? *Robert Simmons Jr. MSc.* *Author of: Hardcore Java (2003) and Maintainable Java (2012)* *LinkedIn: http://www.linkedin.com/pub/robert-simmons/40/852/a39*
Best way to consume from seda by java
Hi all, Could you point me to the best way to launch Camel from a Java class and consume from a SEDA queue in that same class, in the same or a different thread? Thanks a lot, Matteo.
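One common way (a sketch, using a hypothetical seda:reports queue) is to start a plain DefaultCamelContext and pull messages with a ConsumerTemplate, which can be called from the same thread or handed to another:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.ConsumerTemplate;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.impl.DefaultCamelContext;

public class SedaConsumerExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.start();

        // put something on the queue so there is a message to receive
        ProducerTemplate producer = context.createProducerTemplate();
        producer.sendBody("seda:reports", "hello");

        // blocks for up to 5 seconds waiting for a message on the queue
        ConsumerTemplate consumer = context.createConsumerTemplate();
        String body = consumer.receiveBody("seda:reports", 5000, String.class);
        System.out.println(body);

        consumer.stop();
        context.stop();
    }
}
```

receiveBody(...) returns null on timeout, so the same call works from a polling loop on a background thread as well.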
Re: How to Deploy new Camel Context with Hawtio
As I said, the easiest thing is to just use Fuse Fabric, as we've got lots of documentation and out-of-the-box examples that just work.

However, if you want to use the spring watcher directly, try putting a spring camel XML file inside the spring folder (not /sping) within the git repo. E.g. here's a sample git repo for hawtio (which is used by default when you use hawtio, so this file should be there already): https://github.com/hawtio/hawtio-config and here are the watched spring XMLs: https://github.com/hawtio/hawtio-config/tree/master/spring

So try editing that camel XML there. If you run hawtio via this command in the hawtio/hawtio-web directory (assuming you've set up your machine with npm / typescript: https://github.com/hawtio/hawtio/blob/master/BUILDING.md#installing-local-dependencies ):

mvn test-compile exec:java

then you will be able to edit this camel XML file via this URL: http://localhost:8080/hawtio/#/wiki/branch/master/camel/canvas/spring/camel-spring.xml

When you hit save, you'll see the route change in the log (e.g. try editing the Log statement to make it even more obvious). Or, if you prefer, change this blueprint XML file to watch the entire git repo; e.g. change ${hawtio.config.dir}/spring to ${hawtio.config.dir}: https://github.com/hawtio/hawtio/blob/master/hawtio-watcher-spring-context/src/main/resources/OSGI-INF/blueprint/blueprint.xml#L7-7

More detail on hawtio config here: http://hawt.io/configuration/index.html

On 6 November 2013 01:00, Klaus777 max.bridgewa...@gmail.com wrote:

Also, to make sure it is not a compatibility issue, I took https://oss.sonatype.org/content/repositories/public/io/hawt/sample/1.2-M27/sample-1.2-M27.war and inserted the following JAR into its WEB-INF/lib folder: https://oss.sonatype.org/content/repositories/public/io/hawt/hawtio-watcher-spring-context/1.2-M27/hawtio-watcher-spring-context-1.2-M27.jar

I also created an additional camel context /sping/mypersonal-camelcontext.xml.
I declared a camel context in this file with a different id. Still no change in the deployed context.

-- View this message in context: http://camel.465427.n5.nabble.com/How-to-Deploy-new-Camel-Context-with-Hawtio-tp5742571p5742685.html
Sent from the Camel - Users mailing list archive at Nabble.com.

--
James
---
Red Hat
Email: jstra...@redhat.com
Web: http://fusesource.com
Twitter: jstrachan, fusenews
Blog: http://macstrac.blogspot.com/
Open Source Integration
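For reference, the watched file James describes is just an ordinary Spring camel XML. A minimal sketch of what could go in spring/camel-spring.xml (the context id, route id, and log message here are made up) that the watcher should pick up and hot-reload on save:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://camel.apache.org/schema/spring
                           http://camel.apache.org/schema/spring/camel-spring.xsd">

  <camelContext id="watchedContext" xmlns="http://camel.apache.org/schema/spring">
    <route id="demoRoute">
      <from uri="timer://watcher?period=5000"/>
      <log message="edited via the hawtio wiki"/>
    </route>
  </camelContext>

</beans>
```

The key point from the thread is the location: the file must sit under the watched spring/ directory of the git repo (hence the /sping typo breaking it), not just declare a new context id.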