Re: Spring Boot and Camel 2.17.0
You need to add the Camel BOM. See this example: https://github.com/apache/camel/blob/master/examples/camel-example-spring-boot-starter/pom.xml

On Tue, Mar 29, 2016 at 2:19 AM, zpyoung wrote:
> [original message, log output and pom.xml quoted in full; see the "Spring Boot and Camel 2.17.0" post below]

-- Claus Ibsen - http://davsclaus.com
@davsclaus
Camel in Action 2: https://www.manning.com/ibsen2
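[Editor's note: a minimal sketch of what the linked example pom does, for readers hitting the same version mismatch. It imports a Camel BOM in dependencyManagement so every Camel artifact resolves to the same version, overriding what the Spring Boot parent manages. The artifact name camel-spring-boot-dependencies is an assumption here; check the linked pom for the exact coordinates.]

```xml
<!-- Sketch: import a Camel BOM so all Camel artifacts align on one version.
     The artifact name below is an assumption; verify it against the linked example pom. -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-spring-boot-dependencies</artifactId>
            <version>2.17.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```

With the BOM imported, the Camel dependencies in the dependencies section can be declared without explicit version elements.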
Re: Ldap query exception
My route and bean configs: http://0.0.0.0:18181/smg/processes/bdc/"; serviceClass="ar.com.smg.sgi.esb.proxyrightnow.model.TokenService" /> [[RN_KEY]] [[URL_RIGHTNOW]] ${body} [[GROUP_ROL]] -- View this message in context: http://camel.465427.n5.nabble.com/Ldap-query-exception-tp5779904p5779908.html Sent from the Camel - Users mailing list archive at Nabble.com.
camel-kafka 2.17.0
I am using the camel-kafka 2.17.0 component and getting the error below; if I use camel-kafka 2.16.2 it works fine. Please help me fix this.

Error:

2016-03-28 15:04:41.053 ERROR [RdluRestService,,,] 6636 --- [ad | producer-1] o.a.k.clients.producer.internals.Sender : Uncaught error in kafka producer I/O thread:

org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'throttle_time_ms': java.nio.BufferUnderflowException
    at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:71) ~[kafka-clients-0.9.0.1.jar:na]
    at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:439) ~[kafka-clients-0.9.0.1.jar:na]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:265) ~[kafka-clients-0.9.0.1.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216) ~[kafka-clients-0.9.0.1.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128) ~[kafka-clients-0.9.0.1.jar:na]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]

-- View this message in context: http://camel.465427.n5.nabble.com/camel-kafka-2-17-0-tp5779911.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Camel reconnectDelay and maximumReconnectAttempts not working with ConsumerTemplate
I have a Quartz cron job that runs on a schedule, and I am using Camel consumer and producer templates to pick up files from an FTP site and copy them to the local file system. When using reconnectDelay=6, maximumReconnectAttempts=3 and throwExceptionOnConnectFailed=true with the FTP consumer, it doesn't try to reconnect; I get null for the exchange and the consumer stops.

Code snippet:

public class ScheduleProcessorJob extends QuartzJobBean {

    @Autowired
    private ProducerTemplate producerTemplate;

    @Autowired
    private ConsumerTemplate consumerTemplate;

    protected void executeInternal(JobExecutionContext jobContext) throws JobExecutionException {
        try {
            consumerTemplate.start();
            /* Sample srcEndpoint:
             * "ftp://batch.com:10021//inbox?delete=true&throwExceptionOnConnectFailed=true&binary=true&connectTimeout=3&maximumReconnectAttempts=3&reconnectDelay=6&passiveMode=true&password=&readLock=changed&username=xx&flatten=true&recursive=false"
             */
            // loop to get all the files on the remote site
            while (true) {
                Exchange remoteExchange = consumerTemplate.receive(srcEndPoint.toString(), 5000);
                if (remoteExchange == null) {
                    break;
                }
                Exchange postExchange = producerTemplate.send(destEndPoint.toString(), remoteExchange);
                // in case of error
                if (null != postExchange.getException()) {
                    throw postExchange.getException();
                }
            }
        } catch (Exception ex) {
            LOGGER.error("Error picking up from customer ", ex);
        } finally {
            try {
                consumerTemplate.stop();
            } catch (Exception ex) {
                LOGGER.error("Error stopping consumer template ", ex);
            }
        }
    }
}

-- View this message in context: http://camel.465427.n5.nabble.com/Camel-reconnectDelay-and-maximumReconnectAttempts-not-working-with-ConsumerTemplete-tp5779910.html
Sent from the Camel - Users mailing list archive at Nabble.com.
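[Editor's note: if the FTP reconnect options don't retry in this setup, one workaround is to do the retry around the receive call yourself. The sketch below is plain Java with no Camel APIs, assuming a generic retry helper; the class and method names are illustrative, and in the job above the task passed in would be the `consumerTemplate.receive(...)` call.]

```java
import java.util.concurrent.Callable;

public class RetryTemplate {

    // Retry a task up to maxAttempts times, sleeping delayMillis between
    // failed attempts. Returns the first successful result, or rethrows
    // the last failure once all attempts are exhausted.
    public static <T> T withRetry(Callable<T> task, int maxAttempts, long delayMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delayMillis);
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a connect that fails twice, then succeeds on the third try.
        int[] calls = {0};
        String result = withRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) {
                throw new java.io.IOException("connect failed");
            }
            return "file-payload";
        }, 3, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints "file-payload after 3 attempts"
    }
}
```

In the Quartz job this would wrap the `consumerTemplate.receive(srcEndPoint, 5000)` call, so a transient FTP connect failure is retried instead of breaking out of the loop with a null exchange.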
Re: camel-kafka 2.17.0
Working fine after changing Kafka to 0.9.0.1.

-- View this message in context: http://camel.465427.n5.nabble.com/camel-kafka-2-17-0-tp5779911p5779912.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Re: camel-Kafka producer re-tries
Thanks, I will check.

-- View this message in context: http://camel.465427.n5.nabble.com/camel-Kafka-producer-re-tries-tp5779900p5779913.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Spring Boot and Camel 2.17.0
For some reason, when I use the following pom.xml configuration, the version below is displayed in the console. Why is 2.15.4 displayed when I'm trying to use 2.17.0?

2016-03-28 19:09:06.452 INFO 14516 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2016-03-28 19:09:06.469 INFO 14516 --- [ main] o.a.camel.spring.boot.RoutesCollector : Loading additional Camel XML routes from: classpath:camel/*.xml
2016-03-28 19:09:06.470 INFO 14516 --- [ main] o.a.camel.spring.boot.RoutesCollector : Loading additional Camel XML rests from: classpath:camel-rest/*.xml
2016-03-28 19:09:06.471 INFO 14516 --- [ main] o.a.camel.spring.SpringCamelContext : Apache Camel 2.15.4 (CamelContext: docker-camel-retry) is starting
2016-03-28 19:09:06.473 INFO 14516 --- [ main] o.a.c.m.ManagedManagementStrategy : JMX is enabled
2016-03-28 19:09:06.635 INFO 14516 --- [ main] o.a.camel.spring.SpringCamelContext : AllowUseOriginalMessage is enabled. If access to the original message is not needed, then its recommended to turn this option off as it may improve performance.
2016-03-28 19:09:06.635 INFO 14516 --- [ main] o.a.camel.spring.SpringCamelContext : StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
2016-03-28 19:09:06.636 INFO 14516 --- [ main] o.a.camel.spring.SpringCamelContext : Total 0 routes, of which 0 is started.
2016-03-28 19:09:06.638 INFO 14516 --- [ main] o.a.camel.spring.SpringCamelContext : Apache Camel 2.15.4 (CamelContext: docker-camel-retry) started in 0.166 seconds

pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.zandroid</groupId>
    <artifactId>dockercamelretry</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>docker-camel-retry</name>
    <description>Demo project for Spring Boot</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.3.3.RELEASE</version>
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.activemq</groupId>
            <artifactId>activemq-camel</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-spring-boot-starter</artifactId>
            <version>2.17.0</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.16.4</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

-- View this message in context: http://camel.465427.n5.nabble.com/Spring-Boot-and-Camel-2-17-0-tp5779917.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Re: camel-Kafka producer re-tries
You can have errors at multiple levels, and I'm not sure at which one you want to retry only once. Kafka will always retry to some degree, even if you don't have retries configured in Kafka: when the connection breaks, the retries value is still needed to send the message. By using Camel you have yet another level at which a retry can be initiated. I think when you look closely at the log and see which level caused the error, you should be able to figure out whether you need to change the Kafka or the Camel settings.

On Mon, Mar 28, 2016, 17:55 kumar5 wrote:
> [original message, configuration and stack traces quoted in full; see the "camel-Kafka producer re-tries" post below]
Re: Ldap query exception
Please share your route.

> On 28.03.2016 at 18:39, rburdet wrote:
>
> [original message, bean configuration and stack trace quoted in full; see the "Ldap query exception" post below]
Re: Camel Rest DSL issue with fuse fabric
As it turned out, I was not using the correct URL: the contextPath is not part of the actual URL to be used. In this example, I was trying to access the REST service as http://localhost:8181/optima/test/123 and was getting a "Resource not found" error. The correct URL is http://localhost:8181/test/123.

Solution: ignore the contextPath of restConfiguration in the actual REST URL. Is this a bug?

Thanks!
Gagan

-- View this message in context: http://camel.465427.n5.nabble.com/Camel-Rest-DSL-issue-with-fuse-fabric-tp5778820p5779906.html
Sent from the Camel - Users mailing list archive at Nabble.com.
Ldap query exception
Hi, I'm working on an application that needs to make a query to an LDAP server.

My bean is:

The query I'm trying to achieve is:
String queryLdap = "(&(objectClass=user)(sAMAccountName="+userName+")(memberOf="+OU+"))";

My LDAP component in the Camel route:

And the stack trace of the exception is at the bottom. I cannot find any clues to this issue. Any help is appreciated.

Stacktrace:
---
javax.naming.PartialResultException [Root exception is javax.naming.NotContextException: Cannot create context for: ldap://DomainDnsZones.swm.com.ar/DC=DomainDnsZones,DC=swm,DC=com,DC=ar; remaining name 'DC=swm,DC=com,DC=ar']
    at com.sun.jndi.ldap.LdapNamingEnumeration.hasMoreImpl(LdapNamingEnumeration.java:242)[:1.7.0_71]
    at com.sun.jndi.ldap.LdapNamingEnumeration.hasMore(LdapNamingEnumeration.java:189)[:1.7.0_71]
    at org.apache.camel.component.ldap.LdapProducer.simpleSearch(LdapProducer.java:101)[857:org.apache.camel.camel-ldap:2.12.0.redhat-610379]
    at org.apache.camel.component.ldap.LdapProducer.process(LdapProducer.java:71)[857:org.apache.camel.camel-ldap:2.12.0.redhat-610379]
    at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)[151:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:110)[151:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:72)[151:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:398)[151:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)[151:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:118)[151:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:80)[151:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)[151:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.component.cxf.jaxrs.CxfRsInvoker.asyncInvoke(CxfRsInvoker.java:90)[214:org.apache.camel.camel-cxf:2.12.0.redhat-610379]
    at org.apache.camel.component.cxf.jaxrs.CxfRsInvoker.performInvocation(CxfRsInvoker.java:57)[214:org.apache.camel.camel-cxf:2.12.0.redhat-610379]
    at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:104)[173:org.apache.cxf.cxf-api:2.7.0.redhat-610379]
    at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:205)[184:org.apache.cxf.cxf-rt-frontend-jaxrs:2.7.0.redhat-610379]
    at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:102)[184:org.apache.cxf.cxf-rt-frontend-jaxrs:2.7.0.redhat-610379]
    at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:58)[173:org.apache.cxf.cxf-api:2.7.0.redhat-610379]
    at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:94)[173:org.apache.cxf.cxf-api:2.7.0.redhat-610379]
    at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)[173:org.apache.cxf.cxf-api:2.7.0.redhat-610379]
    at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)[173:org.apache.cxf.cxf-api:2.7.0.redhat-610379]
    at org.apache.cxf.transport.http_jetty.JettyHTTPDestination.serviceRequest(JettyHTTPDestination.java:355)[192:org.apache.cxf.cxf-rt-transports-http-jetty:2.7.0.redhat-610379]
    at org.apache.cxf.transport.http_jetty.JettyHTTPDestination.doService(JettyHTTPDestination.java:319)[192:org.apache.cxf.cxf-rt-transports-http-jetty:2.7.0.redhat-610379]
    at org.apache.cxf.transport.http_jetty.JettyHTTPHandler.handle(JettyHTTPHandler.java:72)[192:org.apache.cxf.cxf-rt-transports-http-jetty:2.7.0.redhat-610379]
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1088)[101:org.eclipse.jetty.aggregate.jetty-all-server:8.1.14.v20131031]
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1024)[101:org.eclipse.jetty.aggregate.jetty-all-server:8.1.14.v20131031]
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)[101:org.eclipse.jetty.aggregate.je
camel-Kafka producer re-tries
I have configured a camel-Kafka producer (Camel 2.16.2) for topic USER_REQ_TOPIC. If any exception is thrown it should retry only once, but the logs show 5 tries, and at the end it says it did only one try. I want Kafka to retry only once if any exception happens. Can you please help me understand and fix this issue?

Brief exception:

//kafkaProducer] kafka.client.ClientUtils$ : Fetching topic metadata with correlation id 0 for topics [Set(USER_REQ_TOPIC)] from broker [id:0,host:1.0.10.22,port:9092] failed
//kafkaProducer] kafka.client.ClientUtils$ : Fetching topic metadata with correlation id 1 for topics [Set(USER_REQ_TOPIC)] from broker [id:0,host:1.0.10.22,port:9092] failed
//kafkaProducer] kafka.client.ClientUtils$ : Fetching topic metadata with correlation id 2 for topics [Set(USER_REQ_TOPIC)] from broker [id:0,host:1.0.10.22,port:9092] failed
//kafkaProducer] kafka.client.ClientUtils$ : Fetching topic metadata with correlation id 3 for topics [Set(USER_REQ_TOPIC)] from broker [id:0,host:1.0.10.22,port:9092] failed
//kafkaProducer] kafka.client.ClientUtils$ : Fetching topic metadata with correlation id 4 for topics [Set(USER_REQ_TOPIC)] from broker [id:0,host:1.0.10.22,port:9092] failed

kafka.common.FailedToSendMessageException: Failed to send messages after 1 tries.
    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90) ~[kafka_2.11-0.8.2.2.jar:na]
    at kafka.producer.Producer.send(Producer.scala:77) ~[kafka_2.11-0.8.2.2.jar:na]
    at kafka.javaapi.producer.Producer.send(Producer.scala:33) ~[kafka_2.11-0.8.2.2.jar:na]

Complete stack trace exception details:

-28 09:33:27.695 WARN [RdluRestService,cd3ce6a80be37e22,cd3ce6a80be37e22,false] 2136 --- [//kafkaProducer] o.a.camel.component.kafka.KafkaProducer : No message key or partition key set
2016-03-28 09:33:27.889 INFO [RdluRestService,cd3ce6a80be37e22,cd3ce6a80be37e22,false] 2136 --- [//kafkaProducer] kafka.client.ClientUtils$ : Fetching metadata from broker id:0,host:1.0.10.22,port:9092 with correlation id 0 for 1 topic(s) Set(USER_REQ_TOPIC)
2016-03-28 09:33:28.908 INFO [RdluRestService,cd3ce6a80be37e22,cd3ce6a80be37e22,false] 2136 --- [//kafkaProducer] kafka.producer.SyncProducer : Connected to 1.0.10.22:9092 for producing
2016-03-28 09:33:28.909 INFO [RdluRestService,cd3ce6a80be37e22,cd3ce6a80be37e22,false] 2136 --- [//kafkaProducer] kafka.producer.SyncProducer : Disconnecting from 1.0.10.22:9092
2016-03-28 09:33:28.915 WARN [RdluRestService,cd3ce6a80be37e22,cd3ce6a80be37e22,false] 2136 --- [//kafkaProducer] kafka.client.ClientUtils$ : Fetching topic metadata with correlation id 0 for topics [Set(USER_REQ_TOPIC)] from broker [id:0,host:1.0.10.22,port:9092] failed

java.nio.channels.ClosedChannelException: null
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:100) ~[kafka_2.11-0.8.2.2.jar:na]
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73) ~[kafka_2.11-0.8.2.2.jar:na]
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72) ~[kafka_2.11-0.8.2.2.jar:na]
    at kafka.producer.SyncProducer.send(SyncProducer.scala:113) ~[kafka_2.11-0.8.2.2.jar:na]
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58) ~[kafka_2.11-0.8.2.2.jar:na]
    at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82) [kafka_2.11-0.8.2.2.jar:na]
    at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67) [kafka_2.11-0.8.2.2.jar:na]
    at kafka.utils.Utils$.swallow(Utils.scala:172) [kafka_2.11-0.8.2.2.jar:na]
    at kafka.utils.Logging$class.swallowError(Logging.scala:106) [kafka_2.11-0.8.2.2.jar:na]
    at kafka.utils.Utils$.swallowError(Utils.scala:45) [kafka_2.11-0.8.2.2.jar:na]
    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67) [kafka_2.11-0.8.2.2.jar:na]
    at kafka.producer.Producer.send(Producer.scala:77) [kafka_2.11-0.8.2.2.jar:na]
    at kafka.javaapi.producer.Producer.send(Producer.scala:33) [kafka_2.11-0.8.2.2.jar:na]
    at org.apache.camel.component.kafka.KafkaProducer.process(KafkaProducer.java:93) [camel-kafka-2.16.2.jar:2.16.2]
    at org.apache.camel.impl.InterceptSendToEndpoint$1.process(InterceptSendToEndpoint.java:167) [camel-core-2.16.2.jar:2.16.2]
    at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141) [camel-core-2.16.2.jar:2.16.2]
    at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77) [camel-core-2.16.2.jar:2.16.2]
    at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:163) [camel-co
Re: Best Strategy to process a large number of rows in File
Michele,

There are a number of ways you can do that, and it will depend on what is constraining your REST API. Is it limited in the number of concurrent connections? Is it limited in the number of transactions per minute?

There are at least two components you'll want after the JMS queue. One is a thread pool limiting/defining the number of worker threads. The second is a Throttler EIP: http://camel.apache.org/throttler.html

-- View this message in context: http://camel.465427.n5.nabble.com/Best-Strategy-to-process-a-large-number-of-rows-in-File-tp5779856p5779901.html
Sent from the Camel - Users mailing list archive at Nabble.com.
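[Editor's note: a hedged sketch of the Throttler part in Spring XML, for the Camel 2.15 XML DSL used on JBoss Fuse. The endpoint URIs and the limits are illustrative placeholders, not taken from the original post; this lets through at most 10 messages per 5-minute window.]

```xml
<!-- Sketch (Camel 2.x XML DSL): consume from the queue and throttle the
     downstream REST calls. URIs and limits are illustrative placeholders. -->
<route>
  <from uri="activemq:queue:rows"/>
  <throttle timePeriodMillis="300000">
    <constant>10</constant>
    <to uri="jetty:http://example.com/rest/service"/>
  </throttle>
</route>
```

Combined with a bounded consumer thread pool on the queue, this caps both concurrency and throughput toward the REST interface.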
Camel dependency query
Hello All,

I have a query about Camel dependency jars. Does Camel load all jars, or invoke the required classes on each transaction, or does it load them only once at startup and then cache these jars/classes?

We are planning to put all dependency jars on a NAS location so that all servers always use the same version of the jars from that mounted NAS. So we are wondering: if it loads jars on each transaction we will be burdening the NAS, but if it loads them only once, we should be good.

Vanshul
help on cxfrs client trigger
Hi, I have rsServer and rsClient defined in my camel context (both on address http://localhost:9001/, the server with loggingFeatureEnabled="true" and loggingSizeLimit="-1").

Now I would like to trigger this client every hour and fetch the response from it. I can see various components available, like scheduler, quartz and timer. Please help me with the right component I should use in my case.

-- View this message in context: http://camel.465427.n5.nabble.com/help-on-cxfrs-client-trigger-tp5779888.html
Sent from the Camel - Users mailing list archive at Nabble.com.
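[Editor's note: for reference, a timer-based trigger for such a client can be sketched in Spring XML as below. The route and endpoint names are assumptions; the cxfrs endpoint should point at the existing rsClient bean.]

```xml
<!-- Sketch: fire once an hour (period in milliseconds) and invoke the
     CXF-RS client, logging the response. Names are illustrative. -->
<route>
  <from uri="timer:fetchEveryHour?period=3600000"/>
  <to uri="cxfrs:bean:rsClient"/>
  <log message="Fetched response: ${body}"/>
</route>
```

The quartz component would do the same with cron syntax (e.g. fire at the top of every hour) when calendar-style scheduling is needed; the plain timer is the simplest fit for a fixed interval.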
Re: Netty bi directional communication
CORRECTION: point 1 under ENVIRONMENT should read "This server can handle multiple requests".

On 28 March 2016 at 16:24, Jagannath Naidu <jagannath.na...@fosteringlinux.com> wrote:
> [original message quoted in full; see the "Netty bi directional communication" post below]

-- Thanks & Regards
Jagannath Naidu
Keen & Able Computers Pvt. Ltd.
Netty bi directional communication
Hi List, I am again stuck in a situation. :-D

ENVIRONMENT

I am using Camel 2.15.x and can use this one only.

We have a server which works as follows:
1. This server can handle multiple requests
2. It will not respond in the same TCP session (it will create a new connection every time)
3. It listens on socket 10.1.1.10:1234
4. Result (response) messages from this server are asynchronous (it may pass result messages to the connecting client in any order)
5. It will only pass result (response) messages to the socket (the source socket on the client) from which it received the request.

PROBLEM
1. I can only pass a TCP stream to the server. For this I am using netty.
2. I can send a request using netty as a producer (it will use, say, socket 9.9.9.9:4321 to connect to the server 10.1.1.10:1234)
3. It is InOnly, meaning there will be no immediate response, as the server will take about 30 seconds
4. After the server processes the request, it will connect back to the client (which now has to be a server listening on the same socket 9.9.9.9:4321). IS IT POSSIBLE?
5. On receiving it, this result (response) message has to be stored in a queue

As the following link says, there is a clientMode which sounds like a solution:
http://camel.465427.n5.nabble.com/Bi-directional-comms-on-TCP-connection-td5765782.html

1. Is this even possible using camel netty only? How can a consumer just turn into a producer dynamically?
2. If so, how? I mean, I couldn't find enough evidence or documentation.

-- Thanks & Regards Jagannath Naidu Keen & Able Computers Pvt. Ltd.
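On the clientMode question: from Camel 2.15 the netty consumer supports clientMode=true, which makes the consumer open an outbound connection to the remote server instead of binding a local port, so a "consumer" can sit on the client side of the socket and receive asynchronous results. A rough sketch, using the addresses from the post; the option set, and the assumption that the server writes its results back over the connection the consumer opened, should both be verified against your setup:

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch: route 1 sends fire-and-forget requests; route 2 is a netty
// consumer in clientMode that connects out to the server and stores
// whatever the server sends back into a queue.
public class BiDirectionalNettyRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // InOnly request towards the server
        from("direct:sendRequest")
            .to("netty:tcp://10.1.1.10:1234?sync=false");

        // clientMode consumer: connects to the server and waits for
        // asynchronous result messages
        from("netty:tcp://10.1.1.10:1234?clientMode=true&sync=false")
            .to("activemq:queue:results");
    }
}
```

If the server really insists on opening its own new connection back to a fixed client socket (9.9.9.9:4321), then a plain netty consumer listening on that address is the other option; clientMode only helps when the results come back over a connection the client itself initiated.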
Best Strategy to process a large number of rows in File
Hi everyone, a business requirement of my project is to process a file from a directory, split it, and store each single row in ActiveMQ. Then a consumer will consume the queue, invoking a REST service via the jetty client. So, there is a little problem when the csv file has many lines (~5 rows) to process. I already optimized my route following this topic http://www.davsclaus.com/2011/11/splitting-big-xml-files-with-apache.html in order to split the file quickly and store the rows in AMQ. But now I need to process the messages stored in AMQ slowly, so as not to overload the REST service interface... e.g. n messages at a time every five minutes. Could you please suggest the best strategy to do this, or examples or suggestions to study? I'm looking at ThrottlingInflightRoutePolicy and AMQ parameter configuration but I don't know if this is the right way. I work with JBoss Fuse 6.2.0 (Camel 2.15.1 and ActiveMQ 5.11). Thanks in advance. Best Regards Michele -- View this message in context: http://camel.465427.n5.nabble.com/Best-Strategy-to-process-a-large-number-of-rows-in-File-tp5779856.html Sent from the Camel - Users mailing list archive at Nabble.com.
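Besides ThrottlingInflightRoutePolicy, the Throttler EIP is often the more direct way to express "n messages per time window" on the consuming route. A sketch with illustrative numbers and assumed endpoint names (the queue name and REST URL are placeholders):

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch: let at most 50 messages through every 5 minutes (300,000 ms).
// Queue name, count, window and target URL are all placeholders.
public class ThrottledRestCallerRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:rows")
            .throttle(50).timePeriodMillis(300000)
            .to("jetty:http://localhost:8080/rest/service");
    }
}
```

Since the messages are already safely parked in the broker, it may also help to keep the consumer prefetch small (e.g. appending destination.consumer.prefetchSize=1 to the destination name) so that throttled messages wait on the broker rather than in the consumer's buffer; treat that option syntax as something to verify against your AMQ version.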
Use same db connection
Hi. I'm using the Camel SqlComponent, and I have a problem. I want to get the last_insert_id, and my route is here. TEST_TBL has an auto_increment primary key. I can't get the id I expect in this route, because the "INSERT" and the "SELECT LAST_INSERT_ID()" are on different endpoints, in other words different connections. How can I reliably get the id I expect? Thank you for any help you can provide. -- View this message in context: http://camel.465427.n5.nabble.com/Use-same-db-connection-tp5779853.html Sent from the Camel - Users mailing list archive at Nabble.com.
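One way to avoid the two-connections problem entirely is to read the generated key from the same statement that performed the INSERT, via JDBC generated-keys support, instead of a separate SELECT LAST_INSERT_ID(). A plain-JDBC sketch of the idea (the column name is hypothetical; the Connection would come from your DataSource):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InsertWithKey {
    // Returns the auto_increment key produced by this INSERT, taken from
    // the same statement/connection -- no LAST_INSERT_ID() race.
    static long insertRow(Connection con, String name) throws SQLException {
        String sql = "INSERT INTO TEST_TBL (name) VALUES (?)"; // "name" column is an assumption
        try (PreparedStatement ps =
                 con.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, name);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                return keys.getLong(1);
            }
        }
    }
}
```

Inside Camel, the camel-jdbc component exposes the same mechanism: setting the CamelRetrieveGeneratedKeys header to true should make the endpoint return the keys in the CamelGeneratedKeysRows header, which removes the need for two sql endpoints to share one connection.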
Re: [CAMEL-JETTY] How to manage filter-mapping?
I'm afraid you have to define the routes twice if you want to set up different filters according to the path.

-- Willem Jiang
Blog: http://willemjiang.blogspot.com (English) http://jnn.iteye.com (Chinese)
Twitter: willemjiang
Weibo: 姜宁willem

On March 28, 2016 at 3:49:55 PM, Charlee Chitsuk (charlee...@gmail.com) wrote:
> Hi Willem,
>
> Thank you very much for your reply.
>
> If I have more than one resource, e.g. "/myweb/foo" and "/myweb/bar", I should define the route twice, right?
>
> from("jetty:http://0.0.0.0:8080/myweb/foo" ...)
> from("jetty:http://0.0.0.0:8080/myweb/bar" ...)
>
> Could you please help to advise further?
>
> -- Best Regards,
> Charlee Ch.
>
> 2016-03-28 14:19 GMT+07:00 Willem Jiang :
> > The filter is applied according to the path of the endpoint; if you just want to apply the filter to "/myweb/foo/*", you need to set up the jetty endpoint like this:
> > from("jetty:http://0.0.0.0:8080/myweb/foo" + ...)
> >
> > -- Willem Jiang
> > Blog: http://willemjiang.blogspot.com (English) http://jnn.iteye.com (Chinese)
> > Twitter: willemjiang
> > Weibo: 姜宁willem
> >
> > On March 28, 2016 at 9:39:55 AM, Charlee Chitsuk (charlee...@gmail.com) wrote:
> > > Hi,
> > >
> > > I'm trying to apply a "servlet-filter" to "camel-jetty" (version 2.17.0). At the moment I configure the route as follows:
> > >
> > > from("jetty:http://0.0.0.0:8080/myweb"
> > > + "?matchOnUriPrefix=true&"
> > > + "filtersRef=my-filter&"
> > > + "filterInit.key1=value1")
> > >
> > > Everything works great; the filter applies to "/myweb/*". However, I would like to apply the filter only to some path, e.g. "/myweb/foo/*".
> > >
> > > In "web.xml" we are able to set the 'filter-mapping' by providing the 'url-pattern', e.g.
> > >
> > > <filter-mapping>
> > >   <filter-name>my-filter</filter-name>
> > >   <url-pattern>/foo/*</url-pattern>
> > > </filter-mapping>
> > >
> > > I'm not sure if there is any such configuration via 'camel-jetty' or not.
> > > Could you please help to advise further?
> > >
> > > -- Best Regards,
> > > Charlee Ch.
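Putting Willem's advice into one sketch: two routes, where only the /myweb/foo route declares the filtersRef, so the filter is scoped by path (the direct: handler endpoints are assumed names):

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch: "my-filter" applies only under /myweb/foo/*;
// /myweb/bar/* is served without it (or with its own filtersRef).
public class FilteredJettyRoutes extends RouteBuilder {
    @Override
    public void configure() {
        from("jetty:http://0.0.0.0:8080/myweb/foo"
             + "?matchOnUriPrefix=true&filtersRef=my-filter&filterInit.key1=value1")
            .to("direct:handleFoo");

        from("jetty:http://0.0.0.0:8080/myweb/bar?matchOnUriPrefix=true")
            .to("direct:handleBar");
    }
}
```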
Camel ftp component : JSCH-0.1.44 Vs OpenSSH_6.6.1 issue: Leads to error : com.jcraft.jsch.JSchException: Session.connect: java.io.IOException: End of IO Stream Read
We are facing the below mentioned error since our client applied the latest SSH changes on their side. Is this a problem with the camel-ftp component? Do we need to move to the latest camel-ftp component (2.16)? Does our Fuse/ServiceMix version (apache-servicemix-4.3.1-fuse-03-01) support Camel 2.16?

Current software versions:
1. apache-servicemix-4.3.1-fuse-03-01
2. camel-ftp - 2.6.0

Connection String
---
2016-03-21 12:27:01 | DEBUG | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | Using private keyfile: C:/TEST/Fuse/client/SFTP/client_ppk.openssh
2016-03-21 12:27:01 | DEBUG | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | Using StrickHostKeyChecking: no
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> Connecting to xxx.xx.xx.xxx port 22
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> Connection established
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> Remote version string: SSH-2.0-OpenSSH_6.6.1
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> Local version string: SSH-2.0-JSCH-0.1.44
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> CheckCiphers: aes256-ctr,aes192-ctr,aes128-ctr,aes256-cbc,aes192-cbc,aes128-cbc,3des-ctr,arcfour,arcfour128,arcfour256
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> aes256-ctr is not available.
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> aes192-ctr is not available.
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> aes256-cbc is not available.
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> aes192-cbc is not available.
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> arcfour256 is not available.
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> SSH_MSG_KEXINIT sent
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> SSH_MSG_KEXINIT received
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> kex: server->client aes128-ctr hmac-sha1 none
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> kex: client->server aes128-ctr hmac-sha1 none
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> SSH_MSG_KEX_DH_GEX_REQUEST(1024<1024<1024) sent
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH ->* expecting SSH_MSG_KEX_DH_GEX_GROUP*
2016-03-21 12:27:01 | INFO | tenerContainer-1 | SftpOperations | 207 - org.apache.camel.camel-ftp - 2.6.0.fuse-01-09 | JSCH -> Disconnecting from xxx.xx.xx.xxx port 22

Error stack trace:

2016-03-21 12:27:02 | ERROR | tenerContainer-1 | EndpointMessageListener | 68 - org.apache.camel.camel-core - 2.6.0.fuse-01-09 | Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Cannot connect to sftp://a...@xxx.xx.xx.xxx:22
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://a...@xxx.xx.xx.xxx:22
    at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:113)[207:org.apache.camel.camel-ftp:2.6.0.fuse-01-09]
    at org.apache.camel.component.file.remote.RemoteFileProducer.connectIfNecessary(RemoteFileProducer.java:199)[207:org.apache.camel.camel-ftp:2.6.0.fuse-01-09]
    at org.apache.camel.component.file.remote.RemoteFileProducer.recoverableConnectIfNecessary(RemoteFileProducer.java:189)[207:org.apache.camel.camel-ftp:2.6.0.fuse-01-09]
    at org.apache.camel.component.file.remote.RemoteFileProducer.preWriteCheck(RemoteFileProducer.java:117)[207:org.apache.camel.camel-ftp:2.6.0.fuse-01-09]
Re: [CAMEL-JETTY] How to manage filter-mapping?
Hi Willem,

Thank you very much for your reply.

If I have more than one resource, e.g. "/myweb/foo" and "/myweb/bar", I should define the route twice, right?

from("jetty:http://0.0.0.0:8080/myweb/foo" ...)
from("jetty:http://0.0.0.0:8080/myweb/bar" ...)

Could you please help to advise further?

-- Best Regards,
Charlee Ch.

2016-03-28 14:19 GMT+07:00 Willem Jiang :
> The filter is applied according to the path of the endpoint; if you just want to apply the filter to "/myweb/foo/*", you need to set up the jetty endpoint like this:
> from("jetty:http://0.0.0.0:8080/myweb/foo" + ...)
>
> -- Willem Jiang
> Blog: http://willemjiang.blogspot.com (English) http://jnn.iteye.com (Chinese)
> Twitter: willemjiang
> Weibo: 姜宁willem
>
> On March 28, 2016 at 9:39:55 AM, Charlee Chitsuk (charlee...@gmail.com) wrote:
> > Hi,
> >
> > I'm trying to apply a "servlet-filter" to "camel-jetty" (version 2.17.0). At the moment I configure the route as follows:
> >
> > from("jetty:http://0.0.0.0:8080/myweb"
> > + "?matchOnUriPrefix=true&"
> > + "filtersRef=my-filter&"
> > + "filterInit.key1=value1")
> >
> > Everything works great; the filter applies to "/myweb/*". However, I would like to apply the filter only to some path, e.g. "/myweb/foo/*".
> >
> > In "web.xml" we are able to set the 'filter-mapping' by providing the 'url-pattern', e.g.
> >
> > <filter-mapping>
> >   <filter-name>my-filter</filter-name>
> >   <url-pattern>/foo/*</url-pattern>
> > </filter-mapping>
> >
> > I'm not sure if there is any such configuration via 'camel-jetty' or not.
> > Could you please help to advise further?
> >
> > -- Best Regards,
> > Charlee Ch.
Re: [CAMEL-JETTY] How to manage filter-mapping?
The filter is applied according to the path of the endpoint; if you just want to apply the filter to "/myweb/foo/*", you need to set up the jetty endpoint like this:

from("jetty:http://0.0.0.0:8080/myweb/foo" + ...)

-- Willem Jiang
Blog: http://willemjiang.blogspot.com (English) http://jnn.iteye.com (Chinese)
Twitter: willemjiang
Weibo: 姜宁willem

On March 28, 2016 at 9:39:55 AM, Charlee Chitsuk (charlee...@gmail.com) wrote:
> Hi,
>
> I'm trying to apply a "servlet-filter" to "camel-jetty" (version 2.17.0). At the moment I configure the route as follows:
>
> from("jetty:http://0.0.0.0:8080/myweb"
> + "?matchOnUriPrefix=true&"
> + "filtersRef=my-filter&"
> + "filterInit.key1=value1")
>
> Everything works great; the filter applies to "/myweb/*". However, I would like to apply the filter only to some path, e.g. "/myweb/foo/*".
>
> In "web.xml" we are able to set the 'filter-mapping' by providing the 'url-pattern', e.g.
>
> <filter-mapping>
>   <filter-name>my-filter</filter-name>
>   <url-pattern>/foo/*</url-pattern>
> </filter-mapping>
>
> I'm not sure if there is any such configuration via 'camel-jetty' or not.
> Could you please help to advise further?
>
> -- Best Regards,
> Charlee Ch.