[jira] [Commented] (CAMEL-9143) Producers that implement the ServicePoolAware interface cause memory leak due to JMX references

2015-09-17 Thread Bob Browning (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14791806#comment-14791806
 ] 

Bob Browning commented on CAMEL-9143:
-

Yes, in the test case: since the intent was to simulate RemoteFileXxx, it should
override singleton as RemoteFileProducer does, but it still causes the leak.

Prospect of the fix sounds good, cheers :)
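
For reference, a minimal sketch of what overriding singleton means here,
mirroring what RemoteFileProducer does (shown out of context; the enclosing
producer class is hypothetical):

{code}
// Pooled producers must not be singletons, otherwise the ServicePool is
// bypassed; RemoteFileProducer declares the same.
@Override
public boolean isSingleton() {
    return false;
}
{code}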

> Producers that implement the ServicePoolAware interface cause memory leak due 
> to JMX references
> ---
>
> Key: CAMEL-9143
> URL: https://issues.apache.org/jira/browse/CAMEL-9143
> Project: Camel
>  Issue Type: Bug
>  Components: camel-core
>Affects Versions: 2.14.1, 2.14.2, 2.15.0, 2.15.1
>Reporter: Bob Browning
>Assignee: Claus Ibsen
> Fix For: 2.16.0, 2.15.4
>
>
> h4. Description
> Producer instances that implement the ServicePoolAware interface will leak 
> memory if their route is stopped, with new producers being leaked every time 
> the route is started/stopped.
> Known implementations that are affected are RemoteFileProducer (ftp, sftp) 
> and Mina2Producer.
> This is due to the behaviour of the SendProcessor: when the route is stopped 
> it shuts down its `producerCache` instance.
> {code}
> protected void doStop() throws Exception {
>     ServiceHelper.stopServices(producerCache, producer);
> }
> {code}
> this in turn calls `stopAndShutdownService(pool)`, which calls stop on the 
> SharedProducerServicePool instance (a NOOP); however, it also calls shutdown, 
> which effects a stop of the global pool (this stops all the registered 
> services and then clears the pool).
> {code}
> protected void doStop() throws Exception {
>     // when stopping we intend to shutdown
>     ServiceHelper.stopAndShutdownService(pool);
>     try {
>         ServiceHelper.stopAndShutdownServices(producers.values());
>     } finally {
>         // ensure producers are removed, and also from JMX
>         for (Producer producer : producers.values()) {
>             getCamelContext().removeService(producer);
>         }
>     }
>     producers.clear();
> }
> {code}
> However, no call to `context.removeService(Producer)` is made for the entries 
> from the pool, only for those singleton instances that were in the `producers` 
> map; hence the JMX `ManagedProducer` that is created when `doGetProducer` invokes 
> {code}getCamelContext().addService(answer, false);
> {code} is never removed.
> Since the global pool is empty, when the next request for a producer comes in 
> a new producer is created, JMX wrapper and all, whilst the old instance 
> remains orphaned, retaining any objects that pertain to it.
> One workaround is for the producer to call 
> {code}getEndpoint().getCamelContext().removeService(this){code} in its stop 
> method; however, this is fairly obscure, and it would probably be better to 
> invoke removal of the producer when it is removed from the shared pool.
> Another issue of note: when a route that contains a SendProcessor is shut down, 
> the shutdown invocation on the SharedProducerServicePool clears the global pool 
> of everything, and the pool remains in the `Stopped` state until another route 
> starts it (although it is still accessed and used whilst in the `Stopped` state).
> h4. Impact
> For general use, where there is no dynamic creation or passivation of routes, 
> this issue should be minimal. However, in our use case the routes are not 
> static: there is a certain amount of recreation of routes as customer 
> endpoints change, and a need to passivate idle routes, so this causes a 
> considerable memory leak (via SFTP in particular).
> h4. Test Case
> {code}
> package org.apache.camel.component;
> import com.google.common.util.concurrent.AtomicLongMap;
> import org.apache.camel.CamelContext;
> import org.apache.camel.Consumer;
> import org.apache.camel.Endpoint;
> import org.apache.camel.Exchange;
> import org.apache.camel.Processor;
> import org.apache.camel.Producer;
> import org.apache.camel.Route;
> import org.apache.camel.Service;
> import org.apache.camel.ServicePoolAware;
> import org.apache.camel.ServiceStatus;
> import org.apache.camel.builder.RouteBuilder;
> import org.apache.camel.impl.DefaultComponent;
> import org.apache.camel.impl.DefaultEndpoint;
> import org.apache.camel.impl.DefaultProducer;
> import org.apache.camel.support.LifecycleStrategySupport;
> import org.apache.camel.support.ServiceSupport;
> import org.apache.camel.test.junit4.CamelTestSupport;
> import org.junit.Test;
> import java.util.Map;
> import static com.google.common.base.Preconditions.checkNotNull;
> /**
>  * Test memory behaviour of producers using {@link ServicePoolAware} when 

[jira] [Updated] (CAMEL-9143) Producers that implement the ServicePoolAware interface cause memory leak due to JMX references

2015-09-16 Thread Bob Browning (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Browning updated CAMEL-9143:

Description: 
h4. Description

Producer instances that implement the ServicePoolAware interface will leak 
memory if their route is stopped, with new producers being leaked every time 
the route is started/stopped.

Known implementations that are affected are RemoteFileProducer (ftp, sftp) and 
Mina2Producer.

This is due to the behaviour of the SendProcessor: when the route is stopped 
it shuts down its `producerCache` instance.

{code}
protected void doStop() throws Exception {
    ServiceHelper.stopServices(producerCache, producer);
}
{code}

this in turn calls `stopAndShutdownService(pool)`, which calls stop on the 
SharedProducerServicePool instance (a NOOP); however, it also calls shutdown, 
which effects a stop of the global pool (this stops all the registered 
services and then clears the pool).

{code}
protected void doStop() throws Exception {
    // when stopping we intend to shutdown
    ServiceHelper.stopAndShutdownService(pool);
    try {
        ServiceHelper.stopAndShutdownServices(producers.values());
    } finally {
        // ensure producers are removed, and also from JMX
        for (Producer producer : producers.values()) {
            getCamelContext().removeService(producer);
        }
    }
    producers.clear();
}
{code}

However, no call to `context.removeService(Producer)` is made for the entries 
from the pool, only for those singleton instances that were in the `producers` 
map; hence the JMX `ManagedProducer` that is created when `doGetProducer` invokes 
{code}getCamelContext().addService(answer, false);
{code} is never removed.

Since the global pool is empty, when the next request for a producer comes in 
a new producer is created, JMX wrapper and all, whilst the old instance 
remains orphaned, retaining any objects that pertain to it.

One workaround is for the producer to call 
{code}getEndpoint().getCamelContext().removeService(this){code} in its stop 
method; however, this is fairly obscure, and it would probably be better to 
invoke removal of the producer when it is removed from the shared pool.
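
A minimal sketch of that workaround, assuming a custom ServicePoolAware
producer (the class name and process body are hypothetical):

{code}
public class MyPooledProducer extends DefaultProducer implements ServicePoolAware {

    public MyPooledProducer(Endpoint endpoint) {
        super(endpoint);
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        // real work would go here
    }

    @Override
    public boolean isSingleton() {
        return false; // pooled producers are not singletons
    }

    @Override
    protected void doStop() throws Exception {
        super.doStop();
        // the workaround: deregister this instance so the ManagedProducer
        // JMX wrapper added via addService(answer, false) is released
        getEndpoint().getCamelContext().removeService(this);
    }
}
{code}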

Another issue of note: when a route that contains a SendProcessor is shut down, 
the shutdown invocation on the SharedProducerServicePool clears the global pool 
of everything, and the pool remains in the `Stopped` state until another route 
starts it (although it is still accessed and used whilst in the `Stopped` state).

h4. Impact

For general use, where there is no dynamic creation or passivation of routes, 
this issue should be minimal. However, in our use case the routes are not 
static: there is a certain amount of recreation of routes as customer endpoints 
change, and a need to passivate idle routes, so this causes a considerable 
memory leak (via SFTP in particular).

h4. Test Case
{code}
package org.apache.camel.component;

import com.google.common.util.concurrent.AtomicLongMap;

import org.apache.camel.CamelContext;
import org.apache.camel.Consumer;
import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.Route;
import org.apache.camel.Service;
import org.apache.camel.ServicePoolAware;
import org.apache.camel.ServiceStatus;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultComponent;
import org.apache.camel.impl.DefaultEndpoint;
import org.apache.camel.impl.DefaultProducer;
import org.apache.camel.support.LifecycleStrategySupport;
import org.apache.camel.support.ServiceSupport;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

import java.util.Map;

import static com.google.common.base.Preconditions.checkNotNull;

/**
 * Test memory behaviour of producers using {@link ServicePoolAware} when using JMX.
 */
public class ServicePoolAwareLeakyTest extends CamelTestSupport {

  private static final String LEAKY_SIEVE_STABLE = "leaky://sieve-stable?plugged=true";
  private static final String LEAKY_SIEVE_TRANSIENT = "leaky://sieve-transient?plugged=true";


  private static boolean isPatchApplied() {
return Boolean.parseBoolean(System.getProperty("patchApplied", "false"));
  }

  /**
   * Component that provides leaky producers.
   */
  private static class LeakySieveComponent extends DefaultComponent {
    @Override
    protected Endpoint createEndpoint(String uri, String remaining, Map<String, Object> parameters) throws Exception {
      boolean plugged = "true".equalsIgnoreCase((String) parameters.remove("plugged"));
      return new LeakySieveEndpoint(uri, isPatchApplied() && plugged);
    }
  }

  /**
   * Endpoint that provides leaky producers.
   */
  private static class 

[jira] [Updated] (CAMEL-9143) Producers that implement the ServicePoolAware interface cause memory leak due to JMX references

2015-09-16 Thread Bob Browning (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Browning updated CAMEL-9143:

Affects Version/s: (was: 2.14.3)

> Producers that implement the ServicePoolAware interface cause memory leak due 
> to JMX references
> ---
>
> Key: CAMEL-9143
> URL: https://issues.apache.org/jira/browse/CAMEL-9143
> Project: Camel
>  Issue Type: Bug
>  Components: camel-core
>Affects Versions: 2.14.1, 2.14.2, 2.15.0, 2.15.1
>Reporter: Bob Browning
>
> h4. Description
> Producer instances that implement the ServicePoolAware interface will leak 
> memory if their route is stopped, with new producers being leaked every time 
> the route is started/stopped.
> Known implementations that are affected are RemoteFileProducer (ftp, sftp) 
> and Mina2Producer.
> This is due to the behaviour of the SendProcessor: when the route is stopped 
> it shuts down its `producerCache` instance.
> {code}
> protected void doStop() throws Exception {
>     ServiceHelper.stopServices(producerCache, producer);
> }
> {code}
> this in turn calls `stopAndShutdownService(pool)`, which calls stop on the 
> SharedProducerServicePool instance (a NOOP); however, it also calls shutdown, 
> which effects a stop of the global pool (this stops all the registered 
> services and then clears the pool).
> {code}
> protected void doStop() throws Exception {
>     // when stopping we intend to shutdown
>     ServiceHelper.stopAndShutdownService(pool);
>     try {
>         ServiceHelper.stopAndShutdownServices(producers.values());
>     } finally {
>         // ensure producers are removed, and also from JMX
>         for (Producer producer : producers.values()) {
>             getCamelContext().removeService(producer);
>         }
>     }
>     producers.clear();
> }
> {code}
> However, no call to `context.removeService(Producer)` is made for the entries 
> from the pool, only for those singleton instances that were in the `producers` 
> map; hence the JMX `ManagedProducer` that is created when `doGetProducer` invokes 
> {code}getCamelContext().addService(answer, false);
> {code} is never removed.
> Since the global pool is empty, when the next request for a producer comes in 
> a new producer is created, JMX wrapper and all, whilst the old instance 
> remains orphaned, retaining any objects that pertain to it.
> One workaround is for the producer to call 
> {code}getEndpoint().getCamelContext().removeService(this){code} in its stop 
> method; however, this is fairly obscure, and it would probably be better to 
> invoke removal of the producer when it is removed from the shared pool.
> Another issue of note: when a route that contains a SendProcessor is shut down, 
> the shutdown invocation on the SharedProducerServicePool clears the global pool 
> of everything, and the pool remains in the `Stopped` state until another route 
> starts it (although it is still accessed and used whilst in the `Stopped` state).
> h4. Impact
> For general use, where there is no dynamic creation or passivation of routes, 
> this issue should be minimal. However, in our use case the routes are not 
> static: there is a certain amount of recreation of routes as customer 
> endpoints change, and a need to passivate idle routes, so this causes a 
> considerable memory leak (via SFTP in particular).
> h4. Test Case
> {code}
> package org.apache.camel.component;
> import com.google.common.util.concurrent.AtomicLongMap;
> import org.apache.camel.CamelContext;
> import org.apache.camel.Consumer;
> import org.apache.camel.Endpoint;
> import org.apache.camel.Exchange;
> import org.apache.camel.Processor;
> import org.apache.camel.Producer;
> import org.apache.camel.Route;
> import org.apache.camel.Service;
> import org.apache.camel.ServicePoolAware;
> import org.apache.camel.ServiceStatus;
> import org.apache.camel.builder.RouteBuilder;
> import org.apache.camel.impl.DefaultComponent;
> import org.apache.camel.impl.DefaultEndpoint;
> import org.apache.camel.impl.DefaultProducer;
> import org.apache.camel.support.LifecycleStrategySupport;
> import org.apache.camel.support.ServiceSupport;
> import org.apache.camel.test.junit4.CamelTestSupport;
> import org.junit.Test;
> import java.util.Map;
> import static com.google.common.base.Preconditions.checkNotNull;
> /**
>  * Test memory behaviour of producers using {@link ServicePoolAware} when 
> using JMX.
>  */
> public class ServicePoolAwareLeakyTest extends CamelTestSupport {
>   private static final String LEAKY_SIEVE_STABLE = "leaky://sieve-stable?plugged=true";
>   private static final String LEAKY_SIEVE_TRANSIENT = 
> 

[jira] [Created] (CAMEL-9143) Producers that implement the ServicePoolAware interface cause memory leak due to JMX references

2015-09-16 Thread Bob Browning (JIRA)
Bob Browning created CAMEL-9143:
---

 Summary: Producers that implement the ServicePoolAware interface 
cause memory leak due to JMX references
 Key: CAMEL-9143
 URL: https://issues.apache.org/jira/browse/CAMEL-9143
 Project: Camel
  Issue Type: Bug
  Components: camel-core
Affects Versions: 2.15.1, 2.15.0, 2.14.3, 2.14.2, 2.14.1
Reporter: Bob Browning


h4. Description

Producer instances that implement the ServicePoolAware interface will leak 
memory if their route is stopped, with new producers being leaked every time 
the route is started/stopped.

Known implementations that are affected are RemoteFileProducer (ftp, sftp) and 
Mina2Producer.

This is due to the behaviour of the SendProcessor: when the route is stopped 
it shuts down its `producerCache` instance.

{code}
protected void doStop() throws Exception {
    ServiceHelper.stopServices(producerCache, producer);
}
{code}

this in turn calls `stopAndShutdownService(pool)`, which calls stop on the 
SharedProducerServicePool instance (a NOOP); however, it also calls shutdown, 
which effects a stop of the global pool (this stops all the registered 
services and then clears the pool).

{code}
protected void doStop() throws Exception {
    // when stopping we intend to shutdown
    ServiceHelper.stopAndShutdownService(pool);
    try {
        ServiceHelper.stopAndShutdownServices(producers.values());
    } finally {
        // ensure producers are removed, and also from JMX
        for (Producer producer : producers.values()) {
            getCamelContext().removeService(producer);
        }
    }
    producers.clear();
}
{code}

However, no call to `context.removeService(Producer)` is made for the entries 
from the pool, only for those singleton instances that were in the `producers` 
map; hence the JMX `ManagedProducer` that is created when `doGetProducer` invokes 
{code}getCamelContext().addService(answer, false);
{code} is never removed.

Since the global pool is empty, when the next request for a producer comes in 
a new producer is created, JMX wrapper and all, whilst the old instance 
remains orphaned, retaining any objects that pertain to it.

One workaround is for the producer to call 
{code}getEndpoint().getCamelContext().removeService(this){code} in its stop 
method; however, this is fairly obscure, and it would probably be better to 
invoke removal of the producer when it is removed from the shared pool.

Another issue of note: when a route that contains a SendProcessor is shut down, 
the shutdown invocation on the SharedProducerServicePool clears the global pool 
of everything.

h4. Impact

For general use, where there is no dynamic creation or passivation of routes, 
this issue should be minimal. However, in our use case the routes are not 
static: there is a certain amount of recreation of routes as customer endpoints 
change, and a need to passivate idle routes, so this causes a considerable 
memory leak (via SFTP in particular).

h4. Test Case
{code}
package org.apache.camel.component;

import com.google.common.util.concurrent.AtomicLongMap;

import org.apache.camel.CamelContext;
import org.apache.camel.Consumer;
import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.Route;
import org.apache.camel.Service;
import org.apache.camel.ServicePoolAware;
import org.apache.camel.ServiceStatus;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultComponent;
import org.apache.camel.impl.DefaultEndpoint;
import org.apache.camel.impl.DefaultProducer;
import org.apache.camel.support.LifecycleStrategySupport;
import org.apache.camel.support.ServiceSupport;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

import java.util.Map;

import static com.google.common.base.Preconditions.checkNotNull;

/**
 * Test memory behaviour of producers using {@link ServicePoolAware} when using JMX.
 */
public class ServicePoolAwareLeakyTest extends CamelTestSupport {

  private static final String LEAKY_SIEVE_STABLE = "leaky://sieve-stable?plugged=true";
  private static final String LEAKY_SIEVE_TRANSIENT = "leaky://sieve-transient?plugged=true";


  private static boolean isPatchApplied() {
return Boolean.parseBoolean(System.getProperty("patchApplied", "false"));
  }

  /**
   * Component that provides leaky producers.
   */
  private static class LeakySieveComponent extends DefaultComponent {
    @Override
    protected Endpoint createEndpoint(String uri, String remaining, Map<String, Object> parameters) throws Exception {
      boolean plugged = "true".equalsIgnoreCase((String) parameters.remove("plugged"));
  return new LeakySieveEndpoint(uri, 

[jira] [Created] (CAMEL-9013) Camel HTTP no longer supporting chunked transfer encoding with Tomcat

2015-07-24 Thread Bob Browning (JIRA)
Bob Browning created CAMEL-9013:
---

 Summary: Camel HTTP no longer supporting chunked transfer encoding 
with Tomcat
 Key: CAMEL-9013
 URL: https://issues.apache.org/jira/browse/CAMEL-9013
 Project: Camel
  Issue Type: Bug
  Components: camel-http, camel-servlet
Affects Versions: 2.15.2, 2.15.1, 2.15.0
Reporter: Bob Browning


When sending a chunked POST whilst running the servlet under Tomcat, camel now 
fails to read the input stream and sets the body to null.

[chunked-http-failure-test|https://github.com/ukcrpb6/chunked-http-failure-test]
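
A minimal reproduction sketch using plain JDK HTTP, independent of the linked
test project (the URL and payload are hypothetical):

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ChunkedPostRepro {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/camel/test"); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // 0 = default chunk size; omits Content-Length and forces
        // Transfer-Encoding: chunked, which triggers the null body
        conn.setChunkedStreamingMode(0);
        try (OutputStream out = conn.getOutputStream()) {
            out.write("hello".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
{code}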

This is due to camel checking the stream for available bytes, introduced in 
CAMEL-5806. For whatever reason, CoyoteInputStream returns 0 available 
bytes when handling a chunked request.

{code}
if (len > 0) {
    InputStream is = request.getInputStream();
    if (is.available() == 0) {
        // no data so return null
        return null;
    }
}
{code}
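
A defensive sketch of reading the body without trusting available() (not the
actual Camel fix; the helper name is hypothetical):

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;

// Probe the stream with a one-byte read instead of available(): a chunked
// body can legitimately report 0 available bytes before any chunk arrives.
public static InputStream bodyOrNull(InputStream raw) throws IOException {
    PushbackInputStream in = new PushbackInputStream(raw, 1);
    int first = in.read(); // blocks until the first chunk or end of stream
    if (first == -1) {
        return null; // genuinely empty body
    }
    in.unread(first); // push the probed byte back for the caller
    return in;
}
{code}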






[jira] [Created] (CAMEL-8088) FTP can wait indefinitely when connection timeout occurs during connect

2014-11-27 Thread Bob Browning (JIRA)
Bob Browning created CAMEL-8088:
---

 Summary: FTP can wait indefinitely when connection timeout occurs 
during connect
 Key: CAMEL-8088
 URL: https://issues.apache.org/jira/browse/CAMEL-8088
 Project: Camel
  Issue Type: Bug
  Components: camel-ftp
Affects Versions: 2.13.3
Reporter: Bob Browning
Priority: Minor


In our production system we have seen cases where the FTP thread waits for 
a response indefinitely despite having set _soTimeout_ on the connection. On 
investigation, this is due to a condition where a socket is able 
to connect, yet a firewall or the like then blocks further traffic.

This can be overcome by setting the property _ftpClient.defaultTimeout_ to a 
non-zero value.
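
For example, a hedged sketch of setting that option on an FTP endpoint URI
(host, path, and credentials hypothetical; timeouts in milliseconds):

{code}
from("ftp://user@host/inbox?password=secret"
        + "&ftpClient.defaultTimeout=30000&soTimeout=30000")
    .to("log:downloaded");
{code}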

If no response is received upon the initial socket connection, the socket 
should be deemed dead; however, this is not the case.

When the following exception is thrown during the initial connect to an FTP 
server (after the socket has connected but whilst awaiting the initial reply), 
it can leave the RemoteFileProducer in a state where it is connected but not 
logged in, with no reconnect attempted. If the soTimeout set by 
_ftpClient.defaultTimeout_ is zero, a subsequent command will then wait for a 
reply indefinitely.

{noformat}
Caused by: java.io.IOException: Timed out waiting for initial connect reply
    at org.apache.commons.net.ftp.FTP._connectAction_(FTP.java:389) ~[commons-net-3.1.jar:3.1]
    at org.apache.commons.net.ftp.FTPClient._connectAction_(FTPClient.java:796) ~[commons-net-3.1.jar:3.1]
    at org.apache.commons.net.SocketClient.connect(SocketClient.java:172) ~[commons-net-3.1.jar:3.1]
    at org.apache.commons.net.SocketClient.connect(SocketClient.java:192) ~[commons-net-3.1.jar:3.1]
    at org.apache.camel.component.file.remote.FtpOperations.connect(FtpOperations.java:95) ~[camel-ftp-2.13.1.jar:2.13.1]
{noformat}

The RemoteFileProducer will enter this block, as the loggedIn state has not yet 
been reached; however, the existing broken socket is reused.

{code}
// recover by re-creating operations which should most likely be able to recover
if (!loggedIn) {
    log.debug("Trying to recover connection to: {} with a fresh client.", getEndpoint());
    setOperations(getEndpoint().createRemoteFileOperations());
    connectIfNecessary();
}
{code}

Yet the _connectIfNecessary()_ method will return immediately, since the check 
condition is based on socket connection and takes no account of whether login 
was achieved, so the 'dead' socket is reused.

{code}
protected void connectIfNecessary() throws GenericFileOperationFailedException {
    // This will be skipped when loggedIn == false but the socket is connected
    if (!getOperations().isConnected()) {
        log.debug("Not already connected/logged in. Connecting to: {}", getEndpoint());
        RemoteFileConfiguration config = getEndpoint().getConfiguration();
        loggedIn = getOperations().connect(config);
        if (!loggedIn) {
            return;
        }
        log.info("Connected and logged in to: " + getEndpoint());
    }
}
{code}
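
A hedged sketch of the kind of fix this suggests (not the actual Camel patch):
also reconnect when the socket is connected but login never completed.

{code}
protected void connectIfNecessary() throws GenericFileOperationFailedException {
    // reconnect when not connected, OR when connected but never logged in,
    // so a socket that timed out awaiting the initial reply is abandoned
    if (!loggedIn || !getOperations().isConnected()) {
        log.debug("Not already connected/logged in. Connecting to: {}", getEndpoint());
        RemoteFileConfiguration config = getEndpoint().getConfiguration();
        loggedIn = getOperations().connect(config);
        if (!loggedIn) {
            return;
        }
        log.info("Connected and logged in to: " + getEndpoint());
    }
}
{code}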

A dirty test that demonstrates this blocking condition:

{code}
package ftp;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.JndiRegistry;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.apache.commons.net.ftp.FTPClient;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.mockftpserver.fake.FakeFtpServer;
import org.mockito.Mockito;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketException;
import java.net.SocketTimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;

import javax.net.SocketFactory;

import static org.mockito.Matchers.anyInt;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class FtpInitialConnectTimeoutTest extends CamelTestSupport {

  private static final int CONNECT_TIMEOUT = 11223;

  /**
   * Create the answer for the socket factory that causes a
   * SocketTimeoutException to occur in connect.
   */
  private static class SocketAnswer implements Answer<Socket> {
@Override
public Socket answer(InvocationOnMock invocation) throws Throwable {
  final Socket socket = Mockito.spy(new Socket());
  final AtomicBoolean timeout = new AtomicBoolean();

  try {
    doAnswer(new Answer<InputStream>() {
      @Override
      public InputStream answer(InvocationOnMock invocation) throws Throwable {
final InputStream stream = (InputStream) 

[jira] [Updated] (CAMEL-8088) FTP can wait indefinitely when connection timeout occurs during connect

2014-11-27 Thread Bob Browning (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-8088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Browning updated CAMEL-8088:

Description: 
In our production system we have seen cases where the FTP thread waits for 
a response indefinitely despite having set _soTimeout_ on the connection. On 
investigation, this is due to a condition where a socket is able 
to connect, yet a firewall or the like then blocks further traffic.

This can be overcome by setting the property _ftpClient.defaultTimeout_ to a 
non-zero value.

If no response is received upon the initial socket connection, the socket 
should be deemed dead; however, this is not the case.

When the following exception is thrown during the initial connect to an FTP 
server (after the socket has connected but whilst awaiting the initial reply), 
it can leave the RemoteFileProducer in a state where it is connected but not 
logged in, with no reconnect attempted. If the soTimeout set by 
_ftpClient.defaultTimeout_ is zero, a subsequent command will then wait for a 
reply indefinitely.

{noformat}
Caused by: java.io.IOException: Timed out waiting for initial connect reply
    at org.apache.commons.net.ftp.FTP._connectAction_(FTP.java:389) ~[commons-net-3.1.jar:3.1]
    at org.apache.commons.net.ftp.FTPClient._connectAction_(FTPClient.java:796) ~[commons-net-3.1.jar:3.1]
    at org.apache.commons.net.SocketClient.connect(SocketClient.java:172) ~[commons-net-3.1.jar:3.1]
    at org.apache.commons.net.SocketClient.connect(SocketClient.java:192) ~[commons-net-3.1.jar:3.1]
    at org.apache.camel.component.file.remote.FtpOperations.connect(FtpOperations.java:95) ~[camel-ftp-2.13.1.jar:2.13.1]
{noformat}

The RemoteFileProducer will enter this block, as the loggedIn state has not yet 
been reached; however, the existing broken socket is reused.

{code}
// recover by re-creating operations which should most likely be able to recover
if (!loggedIn) {
    log.debug("Trying to recover connection to: {} with a fresh client.", getEndpoint());
    setOperations(getEndpoint().createRemoteFileOperations());
    connectIfNecessary();
}
{code}

Yet the _connectIfNecessary()_ method will return immediately, since the check 
condition is based on socket connection and takes no account of whether login 
was achieved, so the 'dead' socket is reused.

{code}
protected void connectIfNecessary() throws GenericFileOperationFailedException {
    // This will be skipped when loggedIn == false but the socket is connected
    if (!getOperations().isConnected()) {
        log.debug("Not already connected/logged in. Connecting to: {}", getEndpoint());
        RemoteFileConfiguration config = getEndpoint().getConfiguration();
        loggedIn = getOperations().connect(config);
        if (!loggedIn) {
            return;
        }
        log.info("Connected and logged in to: " + getEndpoint());
    }
}
{code}

A dirty test that demonstrates this blocking condition:

{code}
package ftp;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.JndiRegistry;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.apache.commons.net.ftp.FTPClient;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.mockftpserver.fake.FakeFtpServer;
import org.mockito.Mockito;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketException;
import java.net.SocketTimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;

import javax.net.SocketFactory;

import static org.mockito.Matchers.anyInt;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class FtpInitialConnectTimeoutTest extends CamelTestSupport {

  private static final int CONNECT_TIMEOUT = 11223;

  /**
   * Create the answer for the socket factory that causes a
   * SocketTimeoutException to occur in connect.
   */
  private static class SocketAnswer implements Answer<Socket> {
@Override
public Socket answer(InvocationOnMock invocation) throws Throwable {
  final Socket socket = Mockito.spy(new Socket());
  final AtomicBoolean timeout = new AtomicBoolean();

  try {
    doAnswer(new Answer<InputStream>() {
      @Override
      public InputStream answer(InvocationOnMock invocation) throws Throwable {
        final InputStream stream = (InputStream) invocation.callRealMethod();

InputStream inputStream = new InputStream() {
  @Override
  public int read() throws IOException {
if (timeout.get()) {
  // emulate a timeout 

[jira] [Updated] (CAMEL-7644) Scala camel DSL creates numerous DefaultCamelContext instances

2014-07-29 Thread Bob Browning (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Browning updated CAMEL-7644:


Description: 
Since the camel DSL is invoked prior to 
`.addRoutesToCamelContext(CamelContext)` being invoked, there is no camel 
context set on the delegate java RouteBuilder, which causes it to create a new 
context when the first DSL method is invoked.

With the implementation of CAMEL-7327, introduced in 2.13.1, which stores created 
camel contexts in a set in `Container.Instance#CONTEXT`, this causes instances 
of DefaultCamelContext to be leaked: they are never removed from the static 
set. This is especially apparent during unit testing.

The following test shows that an additional context is registered for the scala 
route builder as opposed to java. Verification of the leak requires a 
profiler and capturing the heap after termination of the test case (in 
ParentRunner.java).

{code:java}
package org.apache.camel.scala.dsl.builder;

import com.google.common.collect.Sets;

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.spi.Container;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.lang.ref.WeakReference;
import java.util.Set;

import static org.junit.Assert.assertEquals;

public class BuggyScalaTest implements Container {

  Set<CamelContext> managed = Sets.newHashSet();

  @Before
  public void setUp() throws Exception {
Container.Instance.set(this);
  }

  @After
  public void tearDown() throws Exception {
Container.Instance.set(null);
  }

  @Test
  public void testNameJava() throws Exception {
DefaultCamelContext defaultCamelContext = new DefaultCamelContext();
defaultCamelContext.addRoutes(new RouteBuilder() {
  @Override
  public void configure() throws Exception {
    from("direct:start").log("a message");
  }
});
defaultCamelContext.start();

    ProducerTemplate producerTemplate = defaultCamelContext.createProducerTemplate();
    producerTemplate.start();
    producerTemplate.sendBody("direct:start", "");
producerTemplate.stop();
defaultCamelContext.stop();

assertEquals(1, managed.size());
  }

  @Test
  public void testNameScala() throws Exception {
DefaultCamelContext defaultCamelContext = new DefaultCamelContext();
defaultCamelContext.addRoutes(new SimpleRouteBuilder());
defaultCamelContext.start();

    ProducerTemplate producerTemplate = defaultCamelContext.createProducerTemplate();
    producerTemplate.start();
    producerTemplate.sendBody("direct:start", "");
producerTemplate.stop();
defaultCamelContext.stop();

assertEquals(1, managed.size()); // will equal 2
  }

  @Override
  public void manage(CamelContext camelContext) {
managed.add(camelContext);
  }
}
{code}

{code:java}
  package com.pressassociation.camel

  import org.apache.camel.scala.dsl.builder.RouteBuilder

  class SimpleRouteBuilder extends RouteBuilder {
    from("direct:start").log("a message")
  }
{code}

  was:
Since the camel DSL is invoked prior to 
`.addRoutesToCamelContext(CamelContext)` being invoked, there is no camel 
context set on the delegate java RouteBuilder, which causes it to create a new 
context when the first DSL method is invoked.

With the implementation of CAMEL-7327, introduced in 2.13.1, which stores created 
camel contexts in a set in `Container.Instance#CONTEXT`, this causes instances 
of DefaultCamelContext to be leaked: they are never removed from the static 
set. This is especially apparent during unit testing.


 Scala camel DSL creates numerous DefaultCamelContext instances
 --

 Key: CAMEL-7644
 URL: https://issues.apache.org/jira/browse/CAMEL-7644
 Project: Camel
  Issue Type: Bug
  Components: camel-scala
Affects Versions: 2.13.1
Reporter: Bob Browning
Assignee: Willem Jiang

 Since the camel DSL is invoked prior to 
 `.addRoutesToCamelContext(CamelContext)` being invoked, there is no camel 
 context set on the delegate java RouteBuilder, which causes it to create a new 
 context when the first DSL method is invoked.
 With the implementation of CAMEL-7327, introduced in 2.13.1, which stores 
 created camel contexts in a set in `Container.Instance#CONTEXT`, this causes 
 instances of DefaultCamelContext to be leaked: they are never removed from 
 the static set. This is especially apparent during unit testing.
 The following test shows that an additional context is registered for the 
 scala route builder as opposed to java. Verification of the leak requires a 
 profiler and capturing the heap after termination of the test case 
 (in ParentRunner.java).
 {code:java}
 package 

[jira] [Updated] (CAMEL-7644) Scala camel DSL creates numerous DefaultCamelContext instances

2014-07-29 Thread Bob Browning (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Browning updated CAMEL-7644:


Description: 
Since the camel DSL is invoked prior to 
`.addRoutesToCamelContext(CamelContext)` being invoked, there is no camel 
context set on the delegate java RouteBuilder, which causes it to create a new 
context when the first DSL method is invoked.

With the implementation of CAMEL-7327, introduced in 2.13.1, which stores created 
camel contexts in a set in `Container.Instance#CONTEXT`, this causes instances 
of DefaultCamelContext to be leaked: they are never removed from the static 
set. This is especially apparent during unit testing.

The following test shows that an additional context is registered for the scala 
route builder as opposed to java. Verification of the leak requires a 
profiler and capturing the heap after termination of the test case (in 
ParentRunner.java).

{code:java}
package org.apache.camel.scala.dsl.builder;

import com.google.common.collect.Sets;

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.spi.Container;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.lang.ref.WeakReference;
import java.util.Set;

import static org.junit.Assert.assertEquals;

public class BuggyScalaTest implements Container {

  Set<CamelContext> managed = Sets.newHashSet();

  @Before
  public void setUp() throws Exception {
Container.Instance.set(this);
  }

  @After
  public void tearDown() throws Exception {
Container.Instance.set(null);
  }

  @Test
  public void testNameJava() throws Exception {
DefaultCamelContext defaultCamelContext = new DefaultCamelContext();
defaultCamelContext.addRoutes(new RouteBuilder() {
  @Override
  public void configure() throws Exception {
    from("direct:start").log("a message");
  }
});
defaultCamelContext.start();

    ProducerTemplate producerTemplate = defaultCamelContext.createProducerTemplate();
    producerTemplate.start();
    producerTemplate.sendBody("direct:start", "");
producerTemplate.stop();
defaultCamelContext.stop();

assertEquals(1, managed.size());
  }

  @Test
  public void testNameScala() throws Exception {
DefaultCamelContext defaultCamelContext = new DefaultCamelContext();
defaultCamelContext.addRoutes(new SimpleRouteBuilder());
defaultCamelContext.start();

    ProducerTemplate producerTemplate = defaultCamelContext.createProducerTemplate();
    producerTemplate.start();
    producerTemplate.sendBody("direct:start", "");
producerTemplate.stop();
defaultCamelContext.stop();

assertEquals(1, managed.size()); // will equal 2
  }

  @Override
  public void manage(CamelContext camelContext) {
managed.add(camelContext);
  }
}
{code}

{code:java}
  package org.apache.camel.scala.dsl.builder

  import org.apache.camel.scala.dsl.builder.RouteBuilder

  class SimpleRouteBuilder extends RouteBuilder {
    from("direct:start").log("a message")
  }
{code}

  was:
Since the camel DSL is invoked prior to 
`.addRoutesToCamelContext(CamelContext)` being invoked, there is no camel 
context set on the delegate java RouteBuilder, which causes it to create a new 
context when the first DSL method is invoked.

With the implementation of CAMEL-7327, introduced in 2.13.1, which stores created 
camel contexts in a set in `Container.Instance#CONTEXT`, this causes instances 
of DefaultCamelContext to be leaked: they are never removed from the static 
set. This is especially apparent during unit testing.

The following test shows that an additional context is registered for the scala 
route builder as opposed to java. Verification of the leak requires a 
profiler and capturing the heap after termination of the test case (in 
ParentRunner.java).

{code:java}
package org.apache.camel.scala.dsl.builder;

import com.google.common.collect.Sets;

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.spi.Container;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.lang.ref.WeakReference;
import java.util.Set;

import static org.junit.Assert.assertEquals;

public class BuggyScalaTest implements Container {

  Set<CamelContext> managed = Sets.newHashSet();

  @Before
  public void setUp() throws Exception {
Container.Instance.set(this);
  }

  @After
  public void tearDown() throws Exception {
Container.Instance.set(null);
  }

  @Test
  public void testNameJava() throws Exception {
DefaultCamelContext defaultCamelContext = new DefaultCamelContext();
defaultCamelContext.addRoutes(new RouteBuilder() {
  @Override
  public void 

[jira] [Created] (CAMEL-7644) Scala camel DSL creates numerous DefaultCamelContext instances

2014-07-28 Thread Bob Browning (JIRA)
Bob Browning created CAMEL-7644:
---

 Summary: Scala camel DSL creates numerous DefaultCamelContext 
instances
 Key: CAMEL-7644
 URL: https://issues.apache.org/jira/browse/CAMEL-7644
 Project: Camel
  Issue Type: Bug
  Components: camel-scala
Affects Versions: 2.13.1
Reporter: Bob Browning


Since the camel DSL is invoked prior to 
`.addRoutesToCamelContext(CamelContext)` being invoked, there is no camel 
context set on the delegate java RouteBuilder, which causes it to create a new 
context when the first DSL method is invoked.

With the implementation of CAMEL-7327, introduced in 2.13.1, which stores created 
camel contexts in a set in `Container.Instance#CONTEXT`, this causes instances 
of DefaultCamelContext to be leaked: they are never removed from the static 
set. This is especially apparent during unit testing.
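
A hedged sketch of observing the leak in isolation, using the same Container
hook as the test shown in the updated description above (Camel 2.13.x API):

{code}
// Count every CamelContext the static Container sees; with the Scala DSL
// each builder instantiation registers an extra, never-removed context.
Container.Instance.set(new Container() {
    @Override
    public void manage(CamelContext camelContext) {
        System.out.println("managed: " + camelContext.getName());
    }
});
{code}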





[jira] [Created] (CAMEL-7500) Concurrent modification of exchange during retry leads to further processing of failed messages

2014-06-11 Thread Bob Browning (JIRA)
Bob Browning created CAMEL-7500:
---

 Summary: Concurrent modification of exchange during retry leads to 
further processing of failed messages
 Key: CAMEL-7500
 URL: https://issues.apache.org/jira/browse/CAMEL-7500
 Project: Camel
  Issue Type: Bug
  Components: camel-netty
Affects Versions: 2.13.1
Reporter: Bob Browning


When an exception occurs on a netty TCP channel, such as ChannelClosedException, 
there are two invocations of the producer callback.

If there is a redelivery handler configured, this causes either two threads to 
be added to the scheduled thread-pool, which then compete, or, in the more common 
case, the first invocation adds the redelivery thread but in doing so clears the 
exception from the exchange, such that when the subsequent callback invocation 
occurs it sees the event as a success and continues routing of the exchange.

Note this also seems to be a cause of negative inflight messages on the route.

The first callback invocation occurs in the ChannelFutureListener, which is the 
usual case.

The second callback invocation comes from the ClientChannelHandler 
registered in the DefaultClientPipelineFactory used by the NettyProducer.
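
A hedged sketch of the defensive pattern this implies (not the actual Camel
fix): make the producer callback idempotent so whichever path fires second
becomes a no-op.

{code}
// Names of the surrounding variables are hypothetical.
final AsyncCallback original = callback;
final AtomicBoolean doneCalled = new AtomicBoolean();
AsyncCallback once = new AsyncCallback() {
    @Override
    public void done(boolean doneSync) {
        // only the first invocation, whether from the ChannelFutureListener
        // or the ClientChannelHandler, reaches the real callback
        if (doneCalled.compareAndSet(false, true)) {
            original.done(doneSync);
        }
    }
};
{code}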





[jira] [Updated] (CAMEL-7500) Concurrent modification of exchange during retry leads to further processing of failed messages

2014-06-11 Thread Bob Browning (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Browning updated CAMEL-7500:


Attachment: NettyRedeliveryTest.java

 Concurrent modification of exchange during retry leads to further processing 
 of failed messages
 --

 Key: CAMEL-7500
 URL: https://issues.apache.org/jira/browse/CAMEL-7500
 Project: Camel
  Issue Type: Bug
  Components: camel-netty
Affects Versions: 2.13.1
Reporter: Bob Browning
 Attachments: NettyRedeliveryTest.java


 When an exception occurs on a netty TCP channel, such as ChannelClosedException, 
 there are two invocations of the producer callback. 
 If there is a redelivery handler configured, this causes either two threads to 
 be added to the scheduled thread-pool, which then compete, or, in the more 
 common case, the first invocation adds the redelivery thread but in doing so 
 clears the exception from the exchange, such that when the subsequent callback 
 invocation occurs it sees the event as a success and continues routing of 
 the exchange.
 Note this also seems to be a cause of negative inflight messages on the route.
 The first callback invocation occurs in the ChannelFutureListener, which is 
 the usual case.
 The second callback invocation comes from the ClientChannelHandler 
 registered in the DefaultClientPipelineFactory used by the NettyProducer.





[jira] [Updated] (CAMEL-7500) Concurrent modification of exchange during retry after netty TCP failure leads to further processing of failed messages

2014-06-11 Thread Bob Browning (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Browning updated CAMEL-7500:


Summary: Concurrent modification of exchange during retry after netty TCP 
failure leads to further processing of failed messages  (was: Concurrent 
modification of exchange during retry leads to further processing of failed 
messages)

 Concurrent modification of exchange during retry after netty TCP failure 
 leads to further processing of failed messages
 --

 Key: CAMEL-7500
 URL: https://issues.apache.org/jira/browse/CAMEL-7500
 Project: Camel
  Issue Type: Bug
  Components: camel-netty
Affects Versions: 2.13.1
Reporter: Bob Browning
 Attachments: NettyRedeliveryTest.java


 When an exception occurs on a netty TCP channel, such as ChannelClosedException, 
 there are two invocations of the producer callback. 
 If there is a redelivery handler configured, this causes either two threads to 
 be added to the scheduled thread-pool, which then compete, or, in the more 
 common case, the first invocation adds the redelivery thread but in doing so 
 clears the exception from the exchange, such that when the subsequent callback 
 invocation occurs it sees the event as a success and continues routing of 
 the exchange.
 Note this also seems to be a cause of negative inflight messages on the route.
 The first callback invocation occurs in the ChannelFutureListener, which is 
 the usual case.
 The second callback invocation comes from the ClientChannelHandler 
 registered in the DefaultClientPipelineFactory used by the NettyProducer.





[jira] [Updated] (CAMEL-7500) Concurrent modification of exchange during retry after netty TCP failure leads to further processing of failed messages

2014-06-11 Thread Bob Browning (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Browning updated CAMEL-7500:


Attachment: NettyRedeliveryTest.java

 Concurrent modification of exchange during retry after netty TCP failure 
 leads to further processing of failed messages
 --

 Key: CAMEL-7500
 URL: https://issues.apache.org/jira/browse/CAMEL-7500
 Project: Camel
  Issue Type: Bug
  Components: camel-netty
Affects Versions: 2.13.1
Reporter: Bob Browning
 Attachments: NettyRedeliveryTest.java


 When an exception occurs on a netty TCP channel, such as ChannelClosedException, 
 there are two invocations of the producer callback. 
 If there is a redelivery handler configured, this causes either two threads to 
 be added to the scheduled thread-pool, which then compete, or, in the more 
 common case, the first invocation adds the redelivery thread but in doing so 
 clears the exception from the exchange, such that when the subsequent callback 
 invocation occurs it sees the event as a success and continues routing of 
 the exchange.
 Note this also seems to be a cause of negative inflight messages on the route.
 The first callback invocation occurs in the ChannelFutureListener, which is 
 the usual case.
 The second callback invocation comes from the ClientChannelHandler 
 registered in the DefaultClientPipelineFactory used by the NettyProducer.





[jira] [Updated] (CAMEL-7500) Concurrent modification of exchange during retry after netty TCP failure leads to further processing of failed messages

2014-06-11 Thread Bob Browning (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Browning updated CAMEL-7500:


Attachment: (was: NettyRedeliveryTest.java)

 Concurrent modification of exchange during retry after netty TCP failure 
 leads to further processing of failed messages
 --

 Key: CAMEL-7500
 URL: https://issues.apache.org/jira/browse/CAMEL-7500
 Project: Camel
  Issue Type: Bug
  Components: camel-netty
Affects Versions: 2.13.1
Reporter: Bob Browning
 Attachments: NettyRedeliveryTest.java


 When an exception occurs on a netty TCP channel, such as ChannelClosedException, 
 there are two invocations of the producer callback. 
 If there is a redelivery handler configured, this causes either two threads to 
 be added to the scheduled thread-pool, which then compete, or, in the more 
 common case, the first invocation adds the redelivery thread but in doing so 
 clears the exception from the exchange, such that when the subsequent callback 
 invocation occurs it sees the event as a success and continues routing of 
 the exchange.
 Note this also seems to be a cause of negative inflight messages on the route.
 The first callback invocation occurs in the ChannelFutureListener, which is 
 the usual case.
 The second callback invocation comes from the ClientChannelHandler 
 registered in the DefaultClientPipelineFactory used by the NettyProducer.





[jira] [Created] (CAMEL-7276) camel-quartz - use of management name to provide default scheduler name breaks context isolation

2014-03-06 Thread Bob Browning (JIRA)
Bob Browning created CAMEL-7276:
---

 Summary: camel-quartz - use of management name to provide default 
scheduler name breaks context isolation
 Key: CAMEL-7276
 URL: https://issues.apache.org/jira/browse/CAMEL-7276
 Project: Camel
  Issue Type: Bug
  Components: camel-quartz
Affects Versions: 2.12.3
Reporter: Bob Browning


Using the camel-quartz component in an unmanaged context with multiple 
camel contexts, for example in a JUnit test case, causes the scheduler to be 
created with the instance name DefaultQuartzScheduler, which is then shared 
across all camel contexts within the same JVM.

This contradicts the previous behaviour, which used 
`getCamelContext().getName()` to isolate the scheduler by denoting that the 
default instance is specific to the camel context.
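
A hedged workaround sketch until the regression is addressed: give each
context's quartz component an explicit per-context scheduler name (this
assumes QuartzComponent exposes setProperties; verify against your Camel
version).

{code}
// configure before any quartz routes are started (java.util.Properties)
QuartzComponent quartz = context.getComponent("quartz", QuartzComponent.class);
Properties props = new Properties();
props.setProperty("org.quartz.scheduler.instanceName", "scheduler-" + context.getName());
quartz.setProperties(props);
{code}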

{code}
package org.apache.camel.component.quartz;

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.management.JmxSystemPropertyKeys;
import org.junit.Test;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

import static org.junit.Assert.assertNotEquals;
import static org.junit.Assert.assertNotSame;

/**
 * Test regression of camel-context isolation of default scheduler instance introduced in CAMEL-7034.
 *
 * @author Bob Browning bob.brown...@pressassociation.com
 */
public class QuartzComponentCamelContextSchedulerIsolationTest {

  @Test
  public void testSchedulerIsolation_unmanaged() throws Exception {
disableJMX();
testSchedulerIsolation();
  }

  @Test
  public void testSchedulerIsolation_managed() throws Exception {
enableJMX();
testSchedulerIsolation();
  }

  private void testSchedulerIsolation() throws Exception {
CamelContext context = createCamelContext();
context.start();

CamelContext anotherContext = createCamelContext();
assertNotEquals(anotherContext.getName(), context.getName());
assertNotEquals(anotherContext, context);

assertNotSame(getDefaultScheduler(context), 
getDefaultScheduler(anotherContext));
  }

  /**
   * Create a new camel context instance.
   */
  private DefaultCamelContext createCamelContext() {
return new DefaultCamelContext();
  }

  /**
   * Get the quartz component for the provided camel context.
   */
  private QuartzComponent getQuartzComponent(CamelContext context) {
    return context.getComponent("quartz", QuartzComponent.class);
  }

  /**
   * Get the default scheduler for the provided camel context.
   */
  private Scheduler getDefaultScheduler(CamelContext context) throws 
SchedulerException {
return getQuartzComponent(context).getFactory().getScheduler();
  }

  /**
   * Disables the JMX agent.
   */
  private void disableJMX() {
    System.setProperty(JmxSystemPropertyKeys.DISABLED, "true");
  }

  /**
   * Enables the JMX agent.
   */
  private void enableJMX() {
    System.setProperty(JmxSystemPropertyKeys.DISABLED, "false");
  }

}
{code}





[jira] [Updated] (CAMEL-7276) camel-quartz - use of management name to provide default scheduler name breaks context isolation

2014-03-06 Thread Bob Browning (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Browning updated CAMEL-7276:


Description: 
Using the camel-quartz component in an unmanaged context with multiple 
camel contexts, for example in a JUnit test case, causes the scheduler to be 
created with the instance name DefaultQuartzScheduler, which is then shared 
across all camel contexts within the same JVM.

This contradicts the previous behaviour, which used 
`getCamelContext().getName()` to isolate the scheduler by denoting that the 
default instance is specific to the camel context.

{code}
package org.apache.camel.component.quartz;

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.management.JmxSystemPropertyKeys;
import org.junit.Test;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

import static org.junit.Assert.assertNotEquals;
import static org.junit.Assert.assertNotSame;

/**
 * Test regression of camel-context isolation of default scheduler instance introduced in CAMEL-7034.
 */
public class QuartzComponentCamelContextSchedulerIsolationTest {

  @Test
  public void testSchedulerIsolation_unmanaged() throws Exception {
disableJMX();
testSchedulerIsolation();
  }

  @Test
  public void testSchedulerIsolation_managed() throws Exception {
enableJMX();
testSchedulerIsolation();
  }

  private void testSchedulerIsolation() throws Exception {
CamelContext context = createCamelContext();
context.start();

CamelContext anotherContext = createCamelContext();
assertNotEquals(anotherContext.getName(), context.getName());
assertNotEquals(anotherContext, context);

assertNotSame(getDefaultScheduler(context), 
getDefaultScheduler(anotherContext));
  }

  /**
   * Create a new camel context instance.
   */
  private DefaultCamelContext createCamelContext() {
return new DefaultCamelContext();
  }

  /**
   * Get the quartz component for the provided camel context.
   */
  private QuartzComponent getQuartzComponent(CamelContext context) {
    return context.getComponent("quartz", QuartzComponent.class);
  }

  /**
   * Get the default scheduler for the provided camel context.
   */
  private Scheduler getDefaultScheduler(CamelContext context) throws 
SchedulerException {
return getQuartzComponent(context).getFactory().getScheduler();
  }

  /**
   * Disables the JMX agent.
   */
  private void disableJMX() {
    System.setProperty(JmxSystemPropertyKeys.DISABLED, "true");
  }

  /**
   * Enables the JMX agent.
   */
  private void enableJMX() {
    System.setProperty(JmxSystemPropertyKeys.DISABLED, "false");
  }

}
{code}

  was:
Using the camel-quartz component in an unmanaged context with multiple 
camel contexts, for example in a JUnit test case, causes the scheduler to be 
created with the instance name DefaultQuartzScheduler, which is then shared 
across all camel contexts within the same JVM.

This contradicts the previous behaviour, which used 
`getCamelContext().getName()` to isolate the scheduler by denoting that the 
default instance is specific to the camel context.

{code}
package org.apache.camel.component.quartz;

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.management.JmxSystemPropertyKeys;
import org.junit.Test;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

import static org.junit.Assert.assertNotEquals;
import static org.junit.Assert.assertNotSame;

/**
 * Test regression of camel-context isolation of default scheduler instance introduced in CAMEL-7034.
 *
 * @author Bob Browning bob.brown...@pressassociation.com
 */
public class QuartzComponentCamelContextSchedulerIsolationTest {

  @Test
  public void testSchedulerIsolation_unmanaged() throws Exception {
disableJMX();
testSchedulerIsolation();
  }

  @Test
  public void testSchedulerIsolation_managed() throws Exception {
enableJMX();
testSchedulerIsolation();
  }

  private void testSchedulerIsolation() throws Exception {
CamelContext context = createCamelContext();
context.start();

CamelContext anotherContext = createCamelContext();
assertNotEquals(anotherContext.getName(), context.getName());
assertNotEquals(anotherContext, context);

assertNotSame(getDefaultScheduler(context), 
getDefaultScheduler(anotherContext));
  }

  /**
   * Create a new camel context instance.
   */
  private DefaultCamelContext createCamelContext() {
return new DefaultCamelContext();
  }

  /**
   * Get the quartz component for the provided camel context.
   */
  private QuartzComponent getQuartzComponent(CamelContext context) {
    return context.getComponent("quartz", QuartzComponent.class);
  }

  /**
   * Get the default scheduler for the provided camel context.
   */
  private Scheduler