[jira] [Commented] (AMQ-4970) Deletion of a queue ineffective across broker restart

2014-01-16 Thread Arthur Naseef (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874456#comment-13874456
 ] 

Arthur Naseef commented on AMQ-4970:


The GC of the destination appears to be the missing link.  Adding a 
Thread.sleep(6) to the test causes 100% failure for both forms of deletion.
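
For context, inactive-destination GC is driven by broker settings along these lines (an illustrative activemq.xml fragment, not the reporter's actual config; the timing values are assumptions, and the `inactiveTimoutBeforeGC` attribute name is spelled here as ActiveMQ ships it):

```xml
<broker xmlns="http://activemq.apache.org/schema/core"
        schedulePeriodForDestinationPurge="10000">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- GC queues that have been inactive for 30s -->
        <policyEntry queue=">" gcInactiveDestinations="true"
                     inactiveTimoutBeforeGC="30000"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>
```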

> Deletion of a queue ineffective across broker restart
> -
>
> Key: AMQ-4970
> URL: https://issues.apache.org/jira/browse/AMQ-4970
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.9.0
> Environment: mac osx/mavericks
>Reporter: Arthur Naseef
> Attachments: AMQ4970Test.zip, AMQ4970Test.zip
>
>
> After a queue is deleted, it is revived from the persistent store when the broker restarts. 
>  The following steps reproduce the problem:
> * Create a queue (confirmed using the REST client I/F)
> * Shutdown the broker
> * Startup the broker
> * Confirm queue still exists via the hawtio ui (correct operation so far)
> * Delete the queue
> * Confirm queue removed via the hawtio ui
> * Shutdown the broker
> * Startup the broker
> * Confirm queue was not recreated via hawtio ui (failed: queue still exists)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (AMQ-4970) Deletion of a queue ineffective across broker restart

2014-01-16 Thread Arthur Naseef (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874446#comment-13874446
 ] 

Arthur Naseef commented on AMQ-4970:


This is interesting: I turned on DEBUG logging for 
org.apache.activemq.broker.region and found this:

{noformat}
2014-01-16 22:10:18,028 | DEBUG | localhost removing destination: queue://TEST2 
| org.apache.activemq.broker.region.AbstractRegion | ActiveMQ Transport: 
tcp:///127.0.0.1:49946@61616
2014-01-16 22:10:18,033 | DEBUG | localhost adding destination: 
topic://ActiveMQ.Advisory.Queue | 
org.apache.activemq.broker.region.AbstractRegion | ActiveMQ Transport: 
tcp:///127.0.0.1:49946@61616
2014-01-16 22:10:18,044 | DEBUG | localhost removing consumer: 
ID:Arthur-Naseefs-MacBook-Pro.local-49945-1389935417793-1:1:-1:1 for 
destination: ActiveMQ.Advisory.TempQueue,ActiveMQ.Advisory.TempTopic | 
org.apache.activemq.broker.region.AbstractRegion | ActiveMQ Transport: 
tcp:///127.0.0.1:49946@61616
2014-01-16 22:10:38,199 | DEBUG | queue://TEST2 expiring messages .. | 
org.apache.activemq.broker.region.Queue | ActiveMQ Broker[localhost] Scheduler
2014-01-16 22:10:38,201 | DEBUG | TEST2 toPageIn: 0, Inflight: 0, 
pagedInMessages.size 0, enqueueCount: 0, dequeueCount: 0 | 
org.apache.activemq.broker.region.Queue | ActiveMQ Broker[localhost] Scheduler
2014-01-16 22:10:38,203 | DEBUG | queue://TEST2 expiring messages done. | 
org.apache.activemq.broker.region.Queue | ActiveMQ Broker[localhost] Scheduler
{noformat}

Watching the contents of the db.redo file, I see the destination name removed, 
and then reappear.



[jira] [Commented] (AMQ-4970) Deletion of a queue ineffective across broker restart

2014-01-16 Thread Arthur Naseef (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874414#comment-13874414
 ] 

Arthur Naseef commented on AMQ-4970:


Trying the exact same config with leveldb, the problem does not happen.
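
For reference, switching stores amounts to swapping the persistenceAdapter element in activemq.xml; a sketch of the two variants (directory paths are illustrative):

```xml
<!-- KahaDB (the default store, which exhibits the revival) -->
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>

<!-- LevelDB (with which the problem does not happen) -->
<persistenceAdapter>
  <levelDB directory="${activemq.data}/leveldb"/>
</persistenceAdapter>
```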



[jira] [Commented] (AMQ-4938) Queue Messages lost after read timeout on REST API.

2014-01-16 Thread Arthur Naseef (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874391#comment-13874391
 ] 

Arthur Naseef commented on AMQ-4938:


Great!  Thanks Timothy.

> Queue Messages lost after read timeout on REST API.
> ---
>
> Key: AMQ-4938
> URL: https://issues.apache.org/jira/browse/AMQ-4938
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.8.0, 5.9.0, 5.10.0
> Environment: Win32, Linux
>Reporter: Peter Eisenlohr
>Priority: Critical
> Attachments: AMQ-4938.patch, AMQ-4938B.patch
>
>
> I have been trying to send/receive messages via a Queue using the [REST 
> API|http://activemq.apache.org/rest.html]. While testing I found that some 
> messages got lost after a consuming request times out when no message is 
> available.
> Here is a transcript of the test case I used:
> {code}
> #
> # OK: send first, consume later
> #
> $ curl -d "body=message" "http://localhost:8161/api/message/TEST?type=queue"
> Message sent
> $ wget --no-http-keep-alive -q -O - 
> "http://localhost:8161/api/message/TEST?type=queue&clientId=GETID&readTimeout=1000"
> message
> #
> # OK: start consuming, then send (within timeout)
> #
> $ wget --no-http-keep-alive -q -O - 
> "http://localhost:8161/api/message/TEST?type=queue&clientId=GETID&readTimeout=5000" &
> [1] 5172
> $ curl -d "body=message" "http://localhost:8161/api/message/TEST?type=queue"
> messageMessage sent
> [1]+  Done                    wget --no-http-keep-alive -q -O - 
> "http://localhost:8161/api/message/TEST?type=queue&clientId=GETID&readTimeout=5000"
> #
> # NOK: start consuming, wait for timeout, then send and consume again
> #
> $ wget --no-http-keep-alive -q -O - 
> "http://localhost:8161/api/message/TEST?type=queue&clientId=GETID&readTimeout=5000"
> $ curl -d "body=message" "http://localhost:8161/api/message/TEST?type=queue"
> Message sent
> $ wget --no-http-keep-alive -q -O - 
> "http://localhost:8161/api/message/TEST?type=queue&clientId=GETID&readTimeout=5000"
> {code}
> The last *wget* returns after the given read timeout without any message. 
> When looking at the management console, the message has been consumed.
> I tested this with 5.8.0 on Linux as well as with 5.8.0, 5.9.0 and a freshly 
> built 5.10.0 on Windows.





[jira] [Updated] (AMQ-4977) Memory leak in ConnectionStateTracker when browsing non-empty queues

2014-01-16 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQ-4977:
--

Priority: Major  (was: Critical)

> Memory leak in ConnectionStateTracker when browsing non-empty queues
> 
>
> Key: AMQ-4977
> URL: https://issues.apache.org/jira/browse/AMQ-4977
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.6.0, 5.7.0, 5.8.0, 5.9.0
>Reporter: Georgi Danov
>
> I think I found a case that is not handled by the fix for AMQ-3316. We see 
> memory leaks connected to this bug, and after a good amount of head-banging I 
> think the problem is in how two methods work together: processMessagePull and 
> trackBack. The first has this piece of code:
> {code}
> // leave a single instance in the cache
> final String id = pull.getDestination() + "::" + pull.getConsumerId();
> messageCache.put(id.intern(), pull);
> {code}
> while the second unconditionally increases currentCacheSize, regardless of 
> whether the previous method added or replaced an entry in the cache.
> The situation in which entries are replaced (not added) and currentCacheSize 
> grows very fast, until it wraps around and becomes negative, is the following:
> * have some logic that frequently creates a queue browser and iterates 
> through all the entries
> * have the queue hold at least one message most of the time; the more 
> messages in the queue, the faster currentCacheSize grows.
> The reason is that processMessagePull reuses the consumer and destination ID 
> for each browsed message in the queue when 
> org.apache.activemq.ActiveMQQueueBrowser#hasMoreElements is invoked. 
> trackBack is ignorant of this and keeps adding to the size even though the 
> cache size stays the same.
> Here is a log from reproducing the issue, as proof:
> {code}
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 1, 5 elements, pending scans:10, memory: 8951KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 2, 10 elements, pending scans:10, memory: 10845KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 3, 15 elements, pending scans:10, memory: 12645KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 4, 20 elements, pending scans:10, memory: 10363KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 5, 25 elements, pending scans:10, memory: 12169KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 6, 30 elements, pending scans:10, memory: 9852KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 7, 35 elements, pending scans:10, memory: 11657KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 8, 40 elements, pending scans:10, memory: 9401KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 9, 45 elements, pending scans:10, memory: 11222KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 10, 50 elements, pending scans:10, memory: 13047KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 11, 55 elements, pending scans:10, memory: 10767KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 12, 60 elements, pending scans:10, memory: 12567KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 13, 65 elements, pending scans:10, memory: 10256KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 138800, 67 elements, pending scans:10, memory: 12085KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 146800, 67 elements, pending scans:10, memory: 9745KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 154800, 67 elements, pending scans:10, memory: 11566KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 162800, 67 elements, pending scans:10, memory: 9225KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 170800, 67 elements, pending scans:10, memory: 11013KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 178800, 67 elements, pending scans:10, memory: 12812KB
> 2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
> currentCacheSize: 186800, 67 elements, pe
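
The accounting mismatch described in this report can be reproduced without ActiveMQ at all. The sketch below is a minimal stand-in (hypothetical class and method names, plain JDK, not the real ConnectionStateTracker) for the processMessagePull/trackBack interaction: the map keeps one slot per destination::consumerId, while the size counter is bumped on every tracked pull.

```java
import java.util.HashMap;
import java.util.Map;

public class CacheSizeDrift {
    static final int MESSAGE_PULL_SIZE = 400; // estimated size per MessagePull

    final Map<String, Object> messageCache = new HashMap<>();
    int currentCacheSize = 0;

    void trackPull(String destination, String consumerId) {
        // processMessagePull: "leave a single instance in the cache"
        String id = destination + "::" + consumerId;
        messageCache.put(id.intern(), new Object()); // replaces, does not add
        // trackBack: counted unconditionally -- this is the drift
        currentCacheSize += MESSAGE_PULL_SIZE;
    }

    // Simulate a queue browser pulling `pulls` messages with one consumer.
    static int[] simulate(int pulls) {
        CacheSizeDrift tracker = new CacheSizeDrift();
        for (int i = 0; i < pulls; i++) {
            tracker.trackPull("queue://TEST", "ID:browser-1:1:1");
        }
        return new int[] { tracker.messageCache.size(), tracker.currentCacheSize };
    }

    public static void main(String[] args) {
        int[] r = simulate(1000);
        // One map entry, yet the counter reports 400,000 bytes.
        System.out.println("entries=" + r[0] + " currentCacheSize=" + r[1]);
    }
}
```

After 1000 browsed pulls the map still holds a single entry while the counter claims 400,000 bytes, which matches the "67 elements" versus six-digit currentCacheSize pattern in the log above.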

[jira] [Commented] (AMQ-4977) Memory leak in ConnectionStateTracker when browsing non-empty queues

2014-01-16 Thread Georgi Danov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874189#comment-13874189
 ] 

Georgi Danov commented on AMQ-4977:
---

I have one; the dump is from it. But it is coupled to tons of classes from our 
project which I cannot easily extract. I believe I have described the scenario 
in enough detail for anybody who is intimate with the class to understand. I 
understand a test would be much more convenient for whoever wants to fix it, 
but I need to get some sleep, so I have to stop here.

As for proving it: I have heap dumps from our production servers showing both 
the negative size counter and the map containing gazillions of pull commands, I 
have our custom test, and I have spent 6 hours today debugging to nail it down. 
It is there, and I am confident about the scenario I have described.


[jira] [Commented] (AMQ-4977) Memory leak in ConnectionStateTracker when browsing non-empty queues

2014-01-16 Thread Timothy Bish (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874140#comment-13874140
 ] 

Timothy Bish commented on AMQ-4977:
---

Could you work up a JUnit test case against the ConnectionStateTracker to prove 
the problem exists?  


[jira] [Updated] (AMQ-4977) Memory leak in ConnectionStateTracker when browsing non-empty queues

2014-01-16 Thread Georgi Danov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Danov updated AMQ-4977:
--

Description: 

[jira] [Updated] (AMQ-4977) Memory leak in ConnectionStateTracker when browsing non-empty queues

2014-01-16 Thread Georgi Danov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Danov updated AMQ-4977:
--

Description: 

[jira] [Commented] (AMQ-3316) Memory leak in ConnectionStateTracker with MessagePull objects

2014-01-16 Thread Georgi Danov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874102#comment-13874102
 ] 

Georgi Danov commented on AMQ-3316:
---

created AMQ-4977

> Memory leak in ConnectionStateTracker with MessagePull objects
> --
>
> Key: AMQ-3316
> URL: https://issues.apache.org/jira/browse/AMQ-3316
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.4.2, 5.5.0
>Reporter: Martin Carpella
>Assignee: Gary Tully
> Fix For: 5.6.0
>
> Attachments: connectionstatetracker_fixed.patch
>
>
> We discovered a memory leak in {{ConnectionStateTracker}} in case a 
> long-lived connection with prefetch=0 is used.
> If prefetch=0 is used, {{MessagePull}} objects are enqueued in 
> {{messageCache}} with an estimated size of 400. But in the cache's 
> {{removeEldestEntry()}} method no size is subtracted from 
> {{currentCacheSize}} for {{MessagePull}} instances. This messes with the 
> cache as it will continue to remove objects even if there is space in the 
> cache. But after about 5,368,709 consumed messages this will cause the 
> {{currentCacheSize}} to roll-over maximum integer and become negative. As a 
> consequence, for the next about 5,368,709 no messages will be removed from 
> the cache any longer.
> This sooner or later will trigger out-of-memory conditions, depending on the 
> size of the various pools. In our case this caused out-of-memory in PermGen 
> first, as message IDs seem to be internalized, and PermGen is considerably 
> smaller than the heap.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (AMQ-4977) Memory leak in ConnectionStateTracker when browsing non-empty queues

2014-01-16 Thread Georgi Danov (JIRA)
Georgi Danov created AMQ-4977:
-

 Summary: Memory leak in ConnectionStateTracker when browsing 
non-empty queues
 Key: AMQ-4977
 URL: https://issues.apache.org/jira/browse/AMQ-4977
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.9.0, 5.8.0, 5.7.0, 5.6.0
Reporter: Georgi Danov
Priority: Critical


I think I found a case that is not handled by the fix for AMQ-3316. We see memory 
leaks connected to this bug, and after a good amount of headbanging I think the 
problem is in how two methods work together - processMessagePull and trackBack. 
The first one has this piece of code:
{code}
// leave a single instance in the cache
final String id = pull.getDestination() + "::" + pull.getConsumerId();
messageCache.put(id.intern(), pull);
{code}
while the second one unconditionally increases currentCacheSize, regardless 
of whether the previous method added or replaced an entry in the cache.
The situation where entries are replaced (not added) and currentCacheSize 
grows very fast until it wraps around and becomes negative is the following:
* have some logic that frequently creates a queue browser and iterates through 
all the entries
* have the queue contain at least one message most of the time. The more 
messages in the queue, the faster currentCacheSize grows.
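The interplay described above can be reduced to a minimal, self-contained sketch (hypothetical names, not the actual ConnectionStateTracker code): put() replaces the single destination::consumerId entry while the counter grows on every pull, so it keeps climbing and eventually overflows Integer.MAX_VALUE:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal model of the accounting bug: put() replaces the single entry
// per destination::consumerId key, but the size counter is incremented
// unconditionally.
public class CacheAccountingBug {
    static final int MESSAGE_PULL_SIZE = 400; // estimated size per MessagePull
    static int currentCacheSize = 0;
    static final Map<String, Object> messageCache = new LinkedHashMap<>();

    // Mirrors processMessagePull (one key per destination::consumerId)
    // combined with trackBack (unconditional size increment).
    static void trackPull(String destination, String consumerId) {
        String id = destination + "::" + consumerId;
        messageCache.put(id.intern(), new Object()); // replaces on repeat pulls
        currentCacheSize += MESSAGE_PULL_SIZE;       // BUG: counted even on replace
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            trackPull("queue://TEST", "consumer-1"); // same browser, same key
        }
        // One cache entry, yet the counter reports 1000 * 400 = 400000;
        // left running it wraps past Integer.MAX_VALUE and goes negative.
        System.out.println(messageCache.size() + " entries, counter=" + currentCacheSize);
    }
}
```

With the fix, the counter would only be incremented when put() returns null (i.e. a new entry was actually added).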
Here is a log from reproducing the issue as proof:
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 1, 5 elements, pending scans:10, memory: 8951KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 2, 10 elements, pending scans:10, memory: 10845KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 3, 15 elements, pending scans:10, memory: 12645KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 4, 20 elements, pending scans:10, memory: 10363KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 5, 25 elements, pending scans:10, memory: 12169KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 6, 30 elements, pending scans:10, memory: 9852KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 7, 35 elements, pending scans:10, memory: 11657KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 8, 40 elements, pending scans:10, memory: 9401KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 9, 45 elements, pending scans:10, memory: 11222KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 10, 50 elements, pending scans:10, memory: 13047KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 11, 55 elements, pending scans:10, memory: 10767KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 12, 60 elements, pending scans:10, memory: 12567KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 13, 65 elements, pending scans:10, memory: 10256KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 138800, 67 elements, pending scans:10, memory: 12085KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 146800, 67 elements, pending scans:10, memory: 9745KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 154800, 67 elements, pending scans:10, memory: 11566KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 162800, 67 elements, pending scans:10, memory: 9225KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 170800, 67 elements, pending scans:10, memory: 11013KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 178800, 67 elements, pending scans:10, memory: 12812KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 186800, 67 elements, pending scans:10, memory: 10522KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 194800, 67 elements, pending scans:10, memory: 12328KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 202800, 67 elements, pending scans:10, memory: KB
2014-01-16 23:05:15  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 210800, 67 elements, pending scans:10, memory: 11805KB
2014-01-16 23:05:15  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 218800, 67 elements, pending scans:10, memory: 9496KB
2014-01-16 23:05:15  WARN 

Re: [DISCUSS] Remove the old ActiveMQ Console

2014-01-16 Thread Gary Tully
I think the web-console should die; letting it rot in a subproject
will not make it more secure, usable, or maintainable.

Then we either -
 1) skin hawtio with an Apache ActiveMQ brand and continue to ship it
 2) document the extension points for third party consoles.

I think dropping it needs to be contingent on either 1 or 2.

Imho, hawtio does it right with the jolokia jmx/http bridge and has
some nice extension points, so I am in favour of 1.

On 2 January 2014 09:59, Robert Davies  wrote:
> The old/original console is no longer fit for purpose; it is hard to 
> maintain and has been the source of a lot of security issues [1] over the last few years.
>
> There is another thread about using hawtio as the console going forward, and 
> without going into all the gory details, it is likely that no web console will 
> be shipped at all in future releases of ActiveMQ. The JMX naming hierarchy was 
> improved for ActiveMQ 5.8, such that it's easy to view the running status of 
> an ActiveMQ broker from 3rd party tools such as jconsole, visualvm or hawtio. 
> Regardless of the outcome of the other discussion [2], it doesn't help the 
> ActiveMQ project to try and maintain a static web console any more.
>
> I propose we remove the old web console from the ActiveMQ 5.10 release - 
> thoughts ?
>
>
>
> [1] 
> https://issues.apache.org/jira/browse/AMQ-2714?jql=project%20%3D%20AMQ%20AND%20text%20~%20%22XSS%22
> [2] http://activemq.2283324.n4.nabble.com/Default-Web-Console-td4675705.html
>
> Rob Davies
> 
> Red Hat, Inc
> http://hawt.io - #dontcha
> Twitter: rajdavies
> Blog: http://rajdavies.blogspot.com
> ActiveMQ in Action: http://www.manning.com/snyder/
>



-- 
http://redhat.com
http://blog.garytully.com


[jira] [Commented] (AMQ-3316) Memory leak in ConnectionStateTracker with MessagePull objects

2014-01-16 Thread Georgi Danov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874039#comment-13874039
 ] 

Georgi Danov commented on AMQ-3316:
---

I think I found a case that is not handled by the fix. We see memory leaks 
connected to this bug, and after a good amount of headbanging I think the 
problem is in how two methods work together - processMessagePull and trackBack. 
The first one has this piece of code:
{code}
  // leave a single instance in the cache
final String id = pull.getDestination() + "::" + pull.getConsumerId();
messageCache.put(id.intern(), pull);
{code}
while the second one unconditionally increases currentCacheSize, regardless 
of whether the previous method added or *replaced* an entry in the cache.

The situation where entries are replaced (not added) and currentCacheSize 
grows very fast until it wraps around and becomes negative is the following:
* have some logic that frequently creates a queue browser and iterates through 
all the entries
* have the queue contain at least one message most of the time. The more 
messages in the queue, the faster currentCacheSize grows.

Here is a log from reproducing the issue as proof:
{code}
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 1, 5 elements, pending scans:10, memory: 8951KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 2, 10 elements, pending scans:10, memory: 10845KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 3, 15 elements, pending scans:10, memory: 12645KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 4, 20 elements, pending scans:10, memory: 10363KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 5, 25 elements, pending scans:10, memory: 12169KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 6, 30 elements, pending scans:10, memory: 9852KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 7, 35 elements, pending scans:10, memory: 11657KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 8, 40 elements, pending scans:10, memory: 9401KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 9, 45 elements, pending scans:10, memory: 11222KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 10, 50 elements, pending scans:10, memory: 13047KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 11, 55 elements, pending scans:10, memory: 10767KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 12, 60 elements, pending scans:10, memory: 12567KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 13, 65 elements, pending scans:10, memory: 10256KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 138800, 67 elements, pending scans:10, memory: 12085KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 146800, 67 elements, pending scans:10, memory: 9745KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 154800, 67 elements, pending scans:10, memory: 11566KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 162800, 67 elements, pending scans:10, memory: 9225KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 170800, 67 elements, pending scans:10, memory: 11013KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 178800, 67 elements, pending scans:10, memory: 12812KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 186800, 67 elements, pending scans:10, memory: 10522KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 194800, 67 elements, pending scans:10, memory: 12328KB
2014-01-16 23:05:14  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 202800, 67 elements, pending scans:10, memory: KB
2014-01-16 23:05:15  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 210800, 67 elements, pending scans:10, memory: 11805KB
2014-01-16 23:05:15  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 218800, 67 elements, pending scans:10, memory: 9496KB
2014-01-16 23:05:15  WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, 
currentCacheSize: 226800, 67 elements, pending scans:10, memory: 11316KB
2014-01-16 23:05:15  WARN ActiveMqMemoryLeakT

activemq pull request: Activemq 5.5.1.1

2014-01-16 Thread alexcojocaru
GitHub user alexcojocaru opened a pull request:

https://github.com/apache/activemq/pull/7

Activemq 5.5.1.1

Branched out of the 5.5.1 tag.

Adds a "stop" goal to the ActiveMQ Maven plugin for stopping the broker 
service on demand, requested by:
https://issues.apache.org/jira/browse/AMQ-3452
https://issues.apache.org/jira/browse/AMQ-4509

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexcojocaru/activemq activemq-5.5.1.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/activemq/pull/7.patch


commit 1d295c323f9030f6bd95482a57387abb9657a273
Author: Hiram R. Chirino 
Date:   2011-10-11T21:19:36Z

[maven-release-plugin]  copy for tag activemq-5.5.1

git-svn-id: 
https://svn.apache.org/repos/asf/activemq/tags/activemq-5.5.1@1182093 
13f79535-47bb-0310-9956-ffa450edef68

commit 64d321fe52ebee49f6fd0cc781bc400905295d1f
Author: acojocaru 
Date:   2014-01-16T21:03:57Z

add goal to stop the ActiveMQ broker





[jira] [Commented] (AMQ-4509) activemq-maven-plugin should have a stop goal

2014-01-16 Thread Alex Cojocaru (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873967#comment-13873967
 ] 

Alex Cojocaru commented on AMQ-4509:


A fork of the ActiveMQ project on GitHub has support for stopping the ActiveMQ 
broker service through the 'stop' goal:
https://github.com/alexcojocaru/activemq/tree/activemq-5.5.1.1

> activemq-maven-plugin should have a stop goal
> -
>
> Key: AMQ-4509
> URL: https://issues.apache.org/jira/browse/AMQ-4509
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 5.5.1
> Environment: Maven 3.0.5, Java 6, Plugin configuration: 
> true
>Reporter: Tim Andersen
>Priority: Minor
>  Labels: activemq-maven-plugin, maven-activemq-plugin
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Using the maven-activemq-plugin (aka activemq-maven-plugin) in a multi-module 
> Maven project, we would like to stop and start ActiveMQ for each module where 
> it is needed (a "stop" and "start" goal, rather than a "run" goal with a 
> shutdown hook). We cannot run an individual module of our multi-module project 
> because we can only start ActiveMQ once for our aggregate pom.xml. This 
> approach would also resolve AMQ-1628 in a different way than was suggested. 
> The approach we are suggesting is similar to how the cargo plugin handles Tomcat.
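The requested usage could look like the following hypothetical pom.xml binding, modelled on the cargo-plugin pattern the reporter mentions. The plugin coordinates, the `fork` flag, and the `stop` goal here are assumptions drawn from this thread and the linked fork, not a released API:

```xml
<plugin>
  <groupId>org.apache.activemq.tooling</groupId>
  <artifactId>maven-activemq-plugin</artifactId>
  <version>5.5.1</version>
  <configuration>
    <!-- assumed: run the broker in the background instead of blocking -->
    <fork>true</fork>
  </configuration>
  <executions>
    <!-- start the broker before this module's integration tests -->
    <execution>
      <id>start-broker</id>
      <phase>pre-integration-test</phase>
      <goals><goal>run</goal></goals>
    </execution>
    <!-- the requested goal: stop it afterwards, no shutdown hook needed -->
    <execution>
      <id>stop-broker</id>
      <phase>post-integration-test</phase>
      <goals><goal>stop</goal></goals>
    </execution>
  </executions>
</plugin>
```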



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (AMQ-3452) add 'stop' goal to the maven-activemq-plugin

2014-01-16 Thread Alex Cojocaru (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873966#comment-13873966
 ] 

Alex Cojocaru commented on AMQ-3452:


AMQ-4509 describes the same issue / request.
A fork of the ActiveMQ project on GitHub has support for stopping the ActiveMQ 
broker service through the 'stop' goal:
https://github.com/alexcojocaru/activemq/tree/activemq-5.5.1.1

> add 'stop' goal to the maven-activemq-plugin
> 
>
> Key: AMQ-3452
> URL: https://issues.apache.org/jira/browse/AMQ-3452
> Project: ActiveMQ
>  Issue Type: New Feature
>Reporter: Paulo Siqueira
>
> This would be useful in at least two scenarios. In one of them, we have a 
> Hudson building our system, and somehow the shutdown hook never gets called.
> The second scenario is using the Maven shell. If we don't stop ActiveMQ 
> explicitly, it will still be running during the next build and cause an error 
> when trying to start a new one.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


activemq pull request: Remove hawt.io console and restore back to just the ...

2014-01-16 Thread dkulp
Github user dkulp closed the pull request at:

https://github.com/apache/activemq/pull/6



[jira] [Closed] (AMQ-4875) JMX: TotalMessageCount has negative value

2014-01-16 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-4875.
-

Resolution: Incomplete

No test case or other information provided to allow any meaningful investigation.

> JMX: TotalMessageCount has negative value
> --
>
> Key: AMQ-4875
> URL: https://issues.apache.org/jira/browse/AMQ-4875
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JMX
>Affects Versions: 5.8.0
>Reporter: Andre Helberg
>Priority: Minor
>
> Hi,
> after purging the dead letter queue I see a negative value for the 
> JMX-attribute TotalMessageCount.
> I use JDBC Message Store.
> kind regards,
> Andre Helberg



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Closed] (AMQ-4928) The producer is closed

2014-01-16 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-4928.
-

Resolution: Incomplete

No unit test provided to reproduce; missing client connection URI and broker or 
client logs. 

> The producer is closed 
> ---
>
> Key: AMQ-4928
> URL: https://issues.apache.org/jira/browse/AMQ-4928
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-pool
>Affects Versions: 5.9.0
>Reporter: Michael Wittig
>  Labels: pool, producer
>
> We see "The producer is closed" exceptions often after the system was idle 
> for a few days/hours (weekend) and then the first requests on Monday 
> start to arrive at the system. I am a bit confused because I thought that 
> producers are not pooled. Only connections and sessions are pooled?
> The spring config of the connection factory looks like this:
> {quote}
> <bean class="org.apache.activemq.pool.PooledConnectionFactory" 
> destroy-method="stop">
>   [property elements stripped by the mail archive]
> </bean>
> {quote}
> The exception:
> {quote}
> Caused by: javax.jms.IllegalStateException: The producer is closed
> at 
> org.apache.activemq.ActiveMQMessageProducer.checkClosed(ActiveMQMessageProducer.java:196)
> at 
> org.apache.activemq.ActiveMQMessageProducerSupport.getDeliveryMode(ActiveMQMessageProducerSupport.java:148)
> at org.apache.activemq.jms.pool.PooledProducer.<init>(PooledProducer.java:42)
> at 
> org.apache.activemq.jms.pool.PooledSession.createProducer(PooledSession.java:359)
> at 
> org.springframework.jms.core.JmsTemplate.doCreateProducer(JmsTemplate.java:971)
> at 
> org.springframework.jms.core.JmsTemplate.createProducer(JmsTemplate.java:952)
> at org.springframework.jms.core.JmsTemplate.doSend(JmsTemplate.java:563)
> at org.springframework.jms.core.JmsTemplate$3.doInJms(JmsTemplate.java:536)
> at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:466)
> ... 16 more
> {quote}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (AMQCPP-530) SSL does not find hostname in cert with multiple cn's in dn

2014-01-16 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQCPP-530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved AMQCPP-530.
-

   Resolution: Fixed
Fix Version/s: 3.9.0
   3.8.3

Great work, I've applied the patch on trunk and the 3.8.x fixes branch

> SSL does not find hostname in cert with multiple cn's in dn
> ---
>
> Key: AMQCPP-530
> URL: https://issues.apache.org/jira/browse/AMQCPP-530
> Project: ActiveMQ C++ Client
>  Issue Type: Bug
>  Components: Decaf
>Affects Versions: 3.8.2
> Environment: unix
>Reporter: Jeffrey B
>Assignee: Timothy Bish
>Priority: Minor
>  Labels: ssl
> Fix For: 3.8.3, 3.9.0
>
> Attachments: OpenSSLSocket.cpp, unified-diff.txt
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The SSL certs that we use contain multiple cn's in the dn, such as 
> dn="cn=%1, cn=hostname, cn=app, cn=project, ou=team, o=company, c=ww"
> I do not know why they are created in this way; it is probably something 
> legacy related. Anyway, because of this, ActiveMQ-CPP will not find the 
> hostname in the dn and will fail dual SSL authentication.
> Here is a page on openssl that states the specific limitation of the method 
> used in the code 
> http://www.openssl.org/docs/crypto/X509_NAME_get_index_by_NID.html
> And this link shows an example usage of the suggested method
> http://h71000.www7.hp.com/doc/83final/ba554_90007/rn02re186.html



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (AMQ-4976) Remove hawt.io console from official distro

2014-01-16 Thread Hadrian Zbarcea (JIRA)
Hadrian Zbarcea created AMQ-4976:


 Summary: Remove hawt.io console from official distro
 Key: AMQ-4976
 URL: https://issues.apache.org/jira/browse/AMQ-4976
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.9.0
Reporter: Hadrian Zbarcea
Assignee: Hadrian Zbarcea


Per discussion on the mailing list, hawt.io should not ship with the official 
activemq distro.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (AMQ-4782) Remove old webconsole from ActiveMQ 5.10 onwards

2014-01-16 Thread Hadrian Zbarcea (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873823#comment-13873823
 ] 

Hadrian Zbarcea commented on AMQ-4782:
--

Paul, there is no final agreement/decision yet. This issue is open for tracking 
purposes. I suspect a decision will be made shortly to avoid the confusion.

> Remove old webconsole from ActiveMQ 5.10 onwards
> 
>
> Key: AMQ-4782
> URL: https://issues.apache.org/jira/browse/AMQ-4782
> Project: ActiveMQ
>  Issue Type: Task
>Affects Versions: 5.10.0
>Reporter: Claus Ibsen
>Priority: Minor
>  Labels: web-console
> Fix For: 5.10.0
>
>
> The old aging web console is deprecated in ActiveMQ 5.9, and intended to be 
> removed from next release, eg 5.10.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (AMQCPP-530) SSL does not find hostname in cert with multiple cn's in dn

2014-01-16 Thread Jeffrey B (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQCPP-530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey B updated AMQCPP-530:
-

Attachment: unified-diff.txt

Here is a unified diff file. Is this a patch?

> SSL does not find hostname in cert with multiple cn's in dn
> ---
>
> Key: AMQCPP-530
> URL: https://issues.apache.org/jira/browse/AMQCPP-530
> Project: ActiveMQ C++ Client
>  Issue Type: Bug
>  Components: Decaf
>Affects Versions: 3.8.2
> Environment: unix
>Reporter: Jeffrey B
>Assignee: Timothy Bish
>Priority: Minor
>  Labels: ssl
> Attachments: OpenSSLSocket.cpp, unified-diff.txt
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The SSL certs that we use contain multiple cn's in the dn, such as 
> dn="cn=%1, cn=hostname, cn=app, cn=project, ou=team, o=company, c=ww"
> I do not know why they are created in this way; it is probably something 
> legacy related. Anyway, because of this, ActiveMQ-CPP will not find the 
> hostname in the dn and will fail dual SSL authentication.
> Here is a page on openssl that states the specific limitation of the method 
> used in the code 
> http://www.openssl.org/docs/crypto/X509_NAME_get_index_by_NID.html
> And this link shows an example usage of the suggested method
> http://h71000.www7.hp.com/doc/83final/ba554_90007/rn02re186.html



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (AMQ-3758) Recover scheduler database option

2014-01-16 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish reassigned AMQ-3758:
-

Assignee: Timothy Bish

> Recover scheduler database option
> -
>
> Key: AMQ-3758
> URL: https://issues.apache.org/jira/browse/AMQ-3758
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Message Store
>Affects Versions: 5.5.0
>Reporter: Sergei Sokolov
>Assignee: Timothy Bish
>  Labels: scheduler
> Fix For: 5.11.0
>
>
> I am not sure why, but the scheduler database got corrupted, and some messages 
> couldn't be delivered to a broker. I got many exceptions similar to:
> {code}
> 2012-03-02 03:26:08,234 | ERROR | JMS Failed to schedule job | 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl | JobScheduler:JMS 
> java.io.IOException: Could not locate data file \db-2.log 
> at org.apache.kahadb.journal.Journal.getDataFile(Journal.java:350) 
> at org.apache.kahadb.journal.Journal.read(Journal.java:597) 
> at 
> org.apache.activemq.broker.scheduler.JobSchedulerStore.getPayload(JobSchedulerStore.java:315)
>  
> at 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl.fireJob(JobSchedulerImpl.java:421)
>  
> at 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl.mainLoop(JobSchedulerImpl.java:473)
>  
> at 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl.run(JobSchedulerImpl.java:429)
>  
> at java.lang.Thread.run(Unknown Source) 
> {code}
> The problem is that there is no way to restore the database like you can if 
> you are working with the main ActiveMQ database. You can fix the main 
> database by specifying the following configuration:
> {code}
> <persistenceAdapter>
> <kahaDB ignoreMissingJournalfiles="true" 
> checkForCorruptJournalFiles="true" 
> checksumJournalFiles="true" />  
> </persistenceAdapter>
> {code}
> It would be nice to have the same feature for the scheduler database.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (AMQ-3758) Recover scheduler database option

2014-01-16 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQ-3758:
--

Fix Version/s: (was: 5.10.0)
   5.11.0

> Recover scheduler database option
> -
>
> Key: AMQ-3758
> URL: https://issues.apache.org/jira/browse/AMQ-3758
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Message Store
>Affects Versions: 5.5.0
>Reporter: Sergei Sokolov
>  Labels: scheduler
> Fix For: 5.11.0
>
>
> I am not sure why, but the scheduler database got corrupted, and some messages 
> couldn't be delivered to a broker. I got many exceptions similar to:
> {code}
> 2012-03-02 03:26:08,234 | ERROR | JMS Failed to schedule job | 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl | JobScheduler:JMS 
> java.io.IOException: Could not locate data file \db-2.log 
> at org.apache.kahadb.journal.Journal.getDataFile(Journal.java:350) 
> at org.apache.kahadb.journal.Journal.read(Journal.java:597) 
> at 
> org.apache.activemq.broker.scheduler.JobSchedulerStore.getPayload(JobSchedulerStore.java:315)
>  
> at 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl.fireJob(JobSchedulerImpl.java:421)
>  
> at 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl.mainLoop(JobSchedulerImpl.java:473)
>  
> at 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl.run(JobSchedulerImpl.java:429)
>  
> at java.lang.Thread.run(Unknown Source) 
> {code}
> The problem is that there is no way to restore the database like you can if 
> you are working with the main ActiveMQ database. You can fix the main 
> database by specifying the following configuration:
> {code}
> <persistenceAdapter>
> <kahaDB ignoreMissingJournalfiles="true" 
> checkForCorruptJournalFiles="true" 
> checksumJournalFiles="true" />  
> </persistenceAdapter>
> {code}
> It would be nice to have the same feature for the scheduler database.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (AMQCPP-530) SSL does not find hostname in cert with multiple cn's in dn

2014-01-16 Thread Timothy Bish (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQCPP-530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873699#comment-13873699
 ] 

Timothy Bish commented on AMQCPP-530:
-

Could you provide a patch file so we can easily see what you've changed?

> SSL does not find hostname in cert with multiple cn's in dn
> ---
>
> Key: AMQCPP-530
> URL: https://issues.apache.org/jira/browse/AMQCPP-530
> Project: ActiveMQ C++ Client
>  Issue Type: Bug
>  Components: Decaf
>Affects Versions: 3.8.2
> Environment: unix
>Reporter: Jeffrey B
>Assignee: Timothy Bish
>Priority: Minor
>  Labels: ssl
> Attachments: OpenSSLSocket.cpp
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The SSL certs that we use contain multiple cn's in the dn, such as 
> dn="cn=%1, cn=hostname, cn=app, cn=project, ou=team, o=company, c=ww"
> I do not know why they are created in this way; it is probably something 
> legacy related. Anyway, because of this, ActiveMQ-CPP will not find the 
> hostname in the dn and will fail dual SSL authentication.
> Here is a page on openssl that states the specific limitation of the method 
> used in the code 
> http://www.openssl.org/docs/crypto/X509_NAME_get_index_by_NID.html
> And this link shows an example usage of the suggested method
> http://h71000.www7.hp.com/doc/83final/ba554_90007/rn02re186.html



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (AMQCPP-530) SSL does not find hostname in cert with multiple cn's in dn

2014-01-16 Thread Jeffrey B (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQCPP-530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey B updated AMQCPP-530:
-

Summary: SSL does not find hostname in cert with multiple cn's in dn  (was: 
SSL does not find hostname in cert with multiple cn's in dc)

> SSL does not find hostname in cert with multiple cn's in dn
> ---
>
> Key: AMQCPP-530
> URL: https://issues.apache.org/jira/browse/AMQCPP-530
> Project: ActiveMQ C++ Client
>  Issue Type: Bug
>  Components: Decaf
>Affects Versions: 3.8.2
> Environment: unix
>Reporter: Jeffrey B
>Assignee: Timothy Bish
>Priority: Minor
>  Labels: ssl
> Attachments: OpenSSLSocket.cpp
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The SSL certs that we use contain multiple cn's in the dn, such as 
> dn="cn=%1, cn=hostname, cn=app, cn=project, ou=team, o=company, c=ww"
> I do not know why they are created in this way; it is probably something 
> legacy related. Anyway, because of this, ActiveMQ-CPP will not find the 
> hostname in the dn and will fail dual SSL authentication.
> Here is a page on openssl that states the specific limitation of the method 
> used in the code 
> http://www.openssl.org/docs/crypto/X509_NAME_get_index_by_NID.html
> And this link shows an example usage of the suggested method
> http://h71000.www7.hp.com/doc/83final/ba554_90007/rn02re186.html





[jira] [Updated] (AMQCPP-530) SSL does not find hostname in cert with multiple cn's in dc

2014-01-16 Thread Jeffrey B (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQCPP-530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey B updated AMQCPP-530:
-

Attachment: OpenSSLSocket.cpp

This file is decaf/internal/net/ssl/openssl/OpenSSLSocket.cpp.
I have made a small change at the end to resolve the stated issue using a 
different OpenSSL method.

> SSL does not find hostname in cert with multiple cn's in dc
> ---
>
> Key: AMQCPP-530
> URL: https://issues.apache.org/jira/browse/AMQCPP-530
> Project: ActiveMQ C++ Client
>  Issue Type: Bug
>  Components: Decaf
>Affects Versions: 3.8.2
> Environment: unix
>Reporter: Jeffrey B
>Assignee: Timothy Bish
>Priority: Minor
>  Labels: ssl
> Attachments: OpenSSLSocket.cpp
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The SSL certs that we use contain multiple cn's in the dn, such as 
> dn="cn=%1, cn=hostname, cn=app, cn=project, ou=team, o=company, c=ww"
> I do not know why they are created in this way. It is probably something 
> legacy-related. Anyway, with this, ActiveMQ-CPP will not find the hostname 
> in the dn, and dual SSL authentication will fail.
> Here is a page on openssl that states the specific limitation of the method 
> used in the code 
> http://www.openssl.org/docs/crypto/X509_NAME_get_index_by_NID.html
> And this link shows an example usage of the suggested method
> http://h71000.www7.hp.com/doc/83final/ba554_90007/rn02re186.html





[jira] [Created] (AMQCPP-530) SSL does not find hostname in cert with multiple cn's in dc

2014-01-16 Thread Jeffrey B (JIRA)
Jeffrey B created AMQCPP-530:


 Summary: SSL does not find hostname in cert with multiple cn's in 
dc
 Key: AMQCPP-530
 URL: https://issues.apache.org/jira/browse/AMQCPP-530
 Project: ActiveMQ C++ Client
  Issue Type: Bug
  Components: Decaf
Affects Versions: 3.8.2
 Environment: unix
Reporter: Jeffrey B
Assignee: Timothy Bish
Priority: Minor


The SSL certs that we use contain multiple cn's in the dn, such as 
dn="cn=%1, cn=hostname, cn=app, cn=project, ou=team, o=company, c=ww"

I do not know why they are created in this way. It is probably something 
legacy-related. Anyway, with this, ActiveMQ-CPP will not find the hostname 
in the dn, and dual SSL authentication will fail.

Here is a page on openssl that states the specific limitation of the method 
used in the code 
http://www.openssl.org/docs/crypto/X509_NAME_get_index_by_NID.html

And this link shows an example usage of the suggested method
http://h71000.www7.hp.com/doc/83final/ba554_90007/rn02re186.html
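The limitation can be sketched without OpenSSL: a lookup that stops at the first matching attribute (as a single X509_NAME_get_index_by_NID call effectively does) misses the hostname whenever it is not the first cn. The following is a minimal, hypothetical illustration in plain Java string handling; `attrValues` and `hostnameMatches` are illustrative helpers, not the actual Decaf code, which must use OpenSSL's X509_NAME entry iteration and also handle escaped commas in real DNs:

```java
import java.util.ArrayList;
import java.util.List;

public class DnScan {

    // Extract all values of a given attribute (e.g. "cn") from a
    // comma-separated DN string such as "cn=a, cn=host, ou=team, o=co".
    // Note: real DN parsing must also handle escaped commas.
    static List<String> attrValues(String dn, String attr) {
        List<String> values = new ArrayList<>();
        for (String rdn : dn.split(",")) {
            String[] kv = rdn.trim().split("=", 2);
            if (kv.length == 2 && kv[0].trim().equalsIgnoreCase(attr)) {
                values.add(kv[1].trim());
            }
        }
        return values;
    }

    // Accept if ANY cn matches the expected hostname, instead of
    // comparing only the first cn entry.
    static boolean hostnameMatches(String dn, String expectedHost) {
        return attrValues(dn, "cn").contains(expectedHost);
    }

    public static void main(String[] args) {
        String dn = "cn=%1, cn=hostname, cn=app, cn=project, ou=team, o=company, c=ww";
        // Comparing only the first cn fails for this DN...
        System.out.println(attrValues(dn, "cn").get(0).equals("hostname")); // false
        // ...while scanning all cn entries succeeds.
        System.out.println(hostnameMatches(dn, "hostname")); // true
    }
}
```

The same shape applies in the C++ code: loop over the name entries (the approach the second link above demonstrates) rather than taking the single index returned for the NID.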







[jira] [Updated] (AMQ-3421) Deadlock when queue fills up

2014-01-16 Thread Rob Davies (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Davies updated AMQ-3421:


 Priority: Major  (was: Critical)
Fix Version/s: NEEDS_REVIEWED

This is probably fixed in 5.6 - but without a JUnit test case it's difficult to 
know.

> Deadlock when queue fills up
> 
>
> Key: AMQ-3421
> URL: https://issues.apache.org/jira/browse/AMQ-3421
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.5.0
> Environment: Tomcat 6.0.29, Spring 3.0.5, Oracle Java 6, Centos 5
>Reporter: Robert Elliot
> Fix For: NEEDS_REVIEWED
>
> Attachments: JStack Output.rtf, jmsMessageQueues.xml
>
>
> We are running a queue to do asynch audit updates, configured via Spring 
> 3.0.5.
> When the queue fills up Tomcat locks up with all catalina threads waiting on 
> an object monitor in Spring.  This object monitor is held by the "ActiveMQ 
> Connection Executor: vm://localhost#2986" thread, which is itself blocked 
> forever awaiting the stopped CountDownLatch at 
> TransportConnection.stop(TransportConnection.java:930).
> There are no "ActiveMQ Task" threads running, which suggests that either the 
> task created by stopAsync has completed or did not run.
> A code review leaves us baffled as to how this latch cannot have counted 
> down, but it hasn't.  Could Tomcat possibly be silently discarding the thread 
> that was meant to do the stop without throwing an exception?! It seems 
> unlikely but (as I understand it) TaskRunnerFactory is breaking the Servlet 
> spec by running up its own Threads.





[jira] [Resolved] (AMQ-4837) LevelDB corrupted when in a replication cluster

2014-01-16 Thread Hiram Chirino (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiram Chirino resolved AMQ-4837.


   Resolution: Fixed
Fix Version/s: 5.10.0

Marking issue as resolved since the fix was confirmed by Guillaume. Thx!

> LevelDB corrupted when in a replication cluster
> ---
>
> Key: AMQ-4837
> URL: https://issues.apache.org/jira/browse/AMQ-4837
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.9.0
> Environment: CentOS, Linux version 2.6.32-71.29.1.el6.x86_64
> java-1.7.0-openjdk.x86_64/java-1.6.0-openjdk.x86_64
> zookeeper-3.4.5.2
>Reporter: Guillaume
>Assignee: Hiram Chirino
>Priority: Critical
> Fix For: 5.10.0
>
> Attachments: LevelDBCorrupted.zip, activemq.xml
>
>
> I have clustered 3 ActiveMQ instances using replicated leveldb and zookeeper. 
> When performing some tests using the Web UI, I came across issues that appear 
> to corrupt the leveldb data files.
> The issue can be replicated by performing the following steps:
> 1. Start 3 activemq nodes.
> 2. Push a message to the master (Node1) and browse the queue using the web 
> UI.
> 3. Stop the master node (Node1).
> 4. Push a message to the new master (Node2) and browse the queue using the 
> web UI. Message summary and queue content ok.
> 5. Start Node1.
> 6. Stop the master node (Node2).
> 7. Browse the queue using the web UI on the new master (Node3). Message 
> summary ok; however, when clicking on the queue, there are no message details. 
> An error (see below) is logged by the master, which attempts a restart.
> From this point, the database appears to be corrupted and the same error 
> occurs to each node infinitely (shutdown/restart). The only way around is to 
> stop the nodes and clear the data files.
> However when a message is pushed between step 5 and 6, the error doesn’t 
> occur.
> =
> Leveldb configuration on the 3 instances:
> <persistenceAdapter>
>   <replicatedLevelDB
>       directory="${activemq.data}/leveldb"
>       replicas="3"
>       bind="tcp://0.0.0.0:0"
>       zkAddress="zkserver:2181"
>       zkPath="/activemq/leveldb-stores"
>   />
> </persistenceAdapter>
> =
> The error is:
> INFO | Stopping BrokerService[localhost] due to exception, java.io.IOException
> java.io.IOException
> at 
> org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:39)
> at 
> org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:543)
> at 
> org.apache.activemq.leveldb.LevelDBClient.might_fail_using_index(LevelDBClient.scala:974)
> at 
> org.apache.activemq.leveldb.LevelDBClient.collectionCursor(LevelDBClient.scala:1270)
> at 
> org.apache.activemq.leveldb.LevelDBClient.queueCursor(LevelDBClient.scala:1194)
> at 
> org.apache.activemq.leveldb.DBManager.cursorMessages(DBManager.scala:708)
>at 
> org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.recoverNextMessages(LevelDBStore.scala:741)
> at 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:106)
> at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:258)
> at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:108)
> at 
> org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:157)
> at 
> org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1875)
> at 
> org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2086)
> at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1581)
> at 
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
> at 
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1198)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1194)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1$$anonfun$apply$mcV$sp$12.apply(Le

logging from activemq leveldb

2014-01-16 Thread kal123
I don't see logging from LevelDBClient.scala such as:

  def queueCursor(collectionKey: Long, seq: Long)(func: (Message) => Boolean) = {
    collectionCursor(collectionKey, encodeLong(seq)) { (key, value) =>
      val seq = decodeLong(key)
      info("Seq read: %L", seq)
      var locator = DataLocator(store, value.getValueLocation, value.getValueLength)
      info("locator read: %s", locator.toString())


Is there something I have to set up to get this logging?
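Assuming those info() calls are actually present in the LevelDBClient build being run, they will only appear if the logger for that package is enabled. One common setup, sketched here against the standard ActiveMQ conf/log4j.properties layout (the file name and existing appenders are assumptions about your installation), is:

```properties
# Raise the level for the LevelDB store classes so their info()/debug()
# output reaches the configured appenders
log4j.logger.org.apache.activemq.leveldb=DEBUG
```

If the lines still do not show up after this, the calls may simply not be present in the released jar you are running.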




--
View this message in context: 
http://activemq.2283324.n4.nabble.com/logging-from-activemq-leveldb-tp4676384.html
Sent from the ActiveMQ - Dev mailing list archive at Nabble.com.


[jira] [Commented] (AMQ-4782) Remove old webconsole from ActiveMQ 5.10 onwards

2014-01-16 Thread Paul Gale (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873608#comment-13873608
 ] 

Paul Gale commented on AMQ-4782:


As it looks like Hawt.IO is going to be removed from either 5.9.1 or 5.10, is 
this task still necessary? I think it would be wiser to leave the old web 
console in place than to ship with no console.

> Remove old webconsole from ActiveMQ 5.10 onwards
> 
>
> Key: AMQ-4782
> URL: https://issues.apache.org/jira/browse/AMQ-4782
> Project: ActiveMQ
>  Issue Type: Task
>Affects Versions: 5.10.0
>Reporter: Claus Ibsen
>Priority: Minor
>  Labels: web-console
> Fix For: 5.10.0
>
>
> The old aging web console is deprecated in ActiveMQ 5.9, and intended to be 
> removed from next release, eg 5.10.





Re: [DISCUSS] Remove the old ActiveMQ Console

2014-01-16 Thread Hadrian Zbarcea
I am +1 on moving the console into a subproject. That way it could have 
a life of its own, independent of the server side.


I am -1 on making the console optional in the distro. I am more in favor 
of having a minimal distro (no console, and maybe other features trimmed) 
and a full distro including the console, like other projects do and as 
Claus, I think, suggested earlier in this thread. Most of the users I know 
actively use the console and expect it in a full distro. I don't 
see any compelling reason to change that.


My $0.01,
Hadrian


On 01/13/2014 05:08 PM, Daniel Kulp wrote:


On Jan 13, 2014, at 10:34 AM, Robert Davies  wrote:


This discussion seems to have slowed/stopped. Although I don’t think there’s a 
consensus, it seems moving the old console to a sub-project and making the 
install optional from the distribution will cover most concerns raised. Unless 
there are objections, I’d like to suggest we make this happen asap and get a new 
ActiveMQ release out - unless we need to vote?



As someone who’s had to struggle to install things behind corporate firewalls 
and networks without internet connectivity and such on several occasions, I’d 
certainly prefer an “activemq-all” distribution or something that would be 
fully complete.   Those “no internet” situations always annoy me when I have 
some optional thing that I really need at that moment.   (yea, I admit, usually 
comes down to poor planning on my part)

Dan




thanks,

Rob

On 9 Jan 2014, at 05:09, Matt Pavlovich  wrote:


+1

On Jan 8, 2014, at 10:02 AM, Hiram Chirino  wrote:


+1

On Wed, Jan 8, 2014 at 4:20 AM, Dejan Bosanac  wrote:

+1 from me as well. We have Jetty in and it should be easy to hot-deploy
any war folks want to use for the web part of the broker. So we can exclude
current web demos as well (which already don't start by default), then
rework them and allow people to install them on demand. This will allow us
to have much leaner broker installation.

Regards
--
Dejan Bosanac
--
Red Hat, Inc.
FuseSource is now part of Red Hat
dbosa...@redhat.com
Twitter: @dejanb
Blog: http://sensatic.net
ActiveMQ in Action: http://www.manning.com/snyder/


On Wed, Jan 8, 2014 at 5:01 AM, Robert Davies  wrote:


I agree, this seems like the best approach so far.

On 7 Jan 2014, at 23:27, Christian Posta 
wrote:


+1 @ Claus, Jim, and Tim's thread of the discussion.

Moving the console to a subproject separates the code out enough and
makes it "less intimidating" to those in the community that would like
to approach it and contribute. Then have one distro that's "headless"
with the option of using whatever console one wanted, including quick
drop in of the old console. Could even distribute a script that goes
out, d/l the old console and installs it on demand as one sees fit (as
james mentioned).



On Tue, Jan 7, 2014 at 2:28 PM, Timothy Bish 

wrote:

On 01/06/2014 03:06 AM, Claus Ibsen wrote:


Hi

I think the old web console should be moved into a sub-project of
ActiveMQ.
Other ASF projects like Felix [1], Karaf [2], etc does this with their
web-consoles.

That may also make it easier for people to contribute to the
web-console as a sub-project, since its codebase would be smaller and
would not contain the entire ActiveMQ source code. That may spark a
little more life into the old web-console so people can help maintain it.

For the standalone ActiveMQ distribution, installing the old web
console should be an easy step, such as unzipping a .zip file, or
copying a .war / .jar or something to a directory, and editing a
configuration file to configure the console (port / context path / or
other configurations). Then other 3rd party consoles could have the
*same* installation procedure, so there is an even playing field.

For the embedded ActiveMQ distribution for SMX/Karaf users, it's
already easy to install the console, as it's just like any other
installation using a feature. This is the same for other 3rd party
consoles, and thus there is already an even playing field.




[1] -


http://felix.apache.org/documentation/subprojects/apache-felix-web-console.html

[2] - http://karaf.apache.org/index/subprojects/webconsole.html


On Thu, Jan 2, 2014 at 10:59 AM, Robert Davies 
wrote:


The old/original console is no longer fit for purpose: it is hard to
maintain and has been the source of a lot of security issues [1] over
the last few years.

There is another thread about using hawtio as the console going forward,
and without going into all the gory details it is probably likely that there
may be no web console shipped at all in future releases of ActiveMQ. The JMX
naming hierarchy was improved for ActiveMQ 5.8, such that it's easy to view
the running status of an ActiveMQ broker from 3rd party tools such as
jconsole, visualvm or hawtio. Regardless of the outcome of the other
discussion [2] - it doesn’t help the ActiveMQ project to try and maintain a
static web console any more.

I propose we remove the old web console from th

[jira] [Commented] (AMQ-4556) Store based cursor "forgets" messages

2014-01-16 Thread Timothy Bish (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873555#comment-13873555
 ] 

Timothy Bish commented on AMQ-4556:
---

Reducing this to 'major' since it's against an older broker release and we have 
no test case to check against the latest release or SNAPSHOT builds. 

> Store based cursor "forgets" messages
> -
>
> Key: AMQ-4556
> URL: https://issues.apache.org/jira/browse/AMQ-4556
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.7.0, 5.8.0
> Environment: Durable queues with JDBC persistent store in a 
> master/slave configuration with NIO transport
>Reporter: SunGard Global Services Germany
> Attachments: ActiveMQClient.java, ActiveMQClient2.java, activemq.xml
>
>
> This issue seems to relate to AMQ-2009 and the referenced articles there.
> h3. The issue:
> After a given period of time, the broker seems to "forget" about received 
> messages on a queue.
> Receiving of new messages and delivering them is unaffected. Just some are 
> lost in the store.
> Checking the DB tables, the missing messages do exist there.
> ActiveMQ statistics counters are also aware of them and count those messages 
> as "pending" in the ActiveMQ console (first column). But opening the details of 
> a queue in the ActiveMQ console does not show them at all.
> h3. Analysis so far
> We tried several settings for the queues which had no impact on the issue:
> * Activating/Deactivating useCache
> * Setting different prefetch values from 0, 1, 100 (at least 1 seems to relax 
> the issue a little bit)
> * KahaDB and JDBC persistent (Oracle and MSSQL(jTDS)) are affected
> Also some characteristics about affected/not affected queues might help to 
> analyse this further:
> * We have one queue with just one producer and many consumers whereas the 
> message is quite small (just headers): No problem on that queue even after 
> thousands of messages
> * Queues with multiple producers and consumers and payloaded text-messages 
> seem to be unaffected - Maybe the JDBC persistent store throttles the 
> processing enough to "solve" the issue
> * Queues with multiple producers and consumers and small messages (just 
> headers) seem to "enable" the issue. Even with few messages the issue appears
> h3. "Recovering" lost messages
> Shutdown the (master) broker and restart it. With failover transport this 
> happens transparent for the clients.
> On master startup, all messages from the store (DB) are scanned and 
> "rediscovered"
> h3. Workaround
> Use another cursor - VM seems to be fine. Additional settings might be 
> required to handle shortcomings of the VM cursor (max memory, queue memory 
> etc.)
> {code:xml}
> 
>   
> 
>   
> 
> {code}
> h3. Test code
> I was not able to create a self-contained unit test.
> But the attached code can be used to reproduce the issue quite reliably.
> It sends 3000 messages. With the following settings it will fail/will work 
> correctly:
> || Message Size || ActiveMQ cursor || Result ||
> | 1 MB (leave line 20 as it is) | vmQueueCursor | (/) - All messages 
> delivered|
> | 1 MB (leave line 20 as it is) | store based Cursor| (/) - All messages 
> delivered|
> | remove comment from line 20 | vmQueueCursor| (/) - All messages delivered|
> | remove comment from line 20 | store based Cursor| (x) - Some (~0,5% 
> messages lost)|
> For completeness the used activemq-broker.xml was also attached - This is the 
> default one with the JDBC persistent store added.





[jira] [Updated] (AMQ-4556) Store based cursor "forgets" messages

2014-01-16 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQ-4556:
--

Priority: Major  (was: Critical)

> Store based cursor "forgets" messages
> -
>
> Key: AMQ-4556
> URL: https://issues.apache.org/jira/browse/AMQ-4556
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.7.0, 5.8.0
> Environment: Durable queues with JDBC persistent store in a 
> master/slave configuration with NIO transport
>Reporter: SunGard Global Services Germany
> Attachments: ActiveMQClient.java, ActiveMQClient2.java, activemq.xml
>
>
> This issue seems to relate to AMQ-2009 and the referenced articles there.
> h3. The issue:
> After a given period of time, the broker seems to "forget" about received 
> messages on a queue.
> Receiving of new messages and delivering them is unaffected. Just some are 
> lost in the store.
> Checking the DB tables, the missing messages do exist there.
> ActiveMQ statistics counters are also aware of them and count those messages 
> as "pending" in the ActiveMQ console (first column). But opening the details of 
> a queue in the ActiveMQ console does not show them at all.
> h3. Analysis so far
> We tried several settings for the queues which had no impact on the issue:
> * Activating/Deactivating useCache
> * Setting different prefetch values from 0, 1, 100 (at least 1 seems to relax 
> the issue a little bit)
> * KahaDB and JDBC persistent (Oracle and MSSQL(jTDS)) are affected
> Also some characteristics about affected/not affected queues might help to 
> analyse this further:
> * We have one queue with just one producer and many consumers whereas the 
> message is quite small (just headers): No problem on that queue even after 
> thousands of messages
> * Queues with multiple producers and consumers and payloaded text-messages 
> seem to be unaffected - Maybe the JDBC persistent store throttles the 
> processing enough to "solve" the issue
> * Queues with multiple producers and consumers and small messages (just 
> headers) seem to "enable" the issue. Even with few messages the issue appears
> h3. "Recovering" lost messages
> Shutdown the (master) broker and restart it. With failover transport this 
> happens transparent for the clients.
> On master startup, all messages from the store (DB) are scanned and 
> "rediscovered"
> h3. Workaround
> Use another cursor - VM seems to be fine. Additional settings might be 
> required to handle shortcomings of the VM cursor (max memory, queue memory 
> etc.)
> {code:xml}
> 
>   
> 
>   
> 
> {code}
> h3. Test code
> I was not able to create a self-contained unit test.
> But the attached code can be used to reproduce the issue quite reliably.
> It sends 3000 messages. With the following settings it will fail/will work 
> correctly:
> || Message Size || ActiveMQ cursor || Result ||
> | 1 MB (leave line 20 as it is) | vmQueueCursor | (/) - All messages 
> delivered|
> | 1 MB (leave line 20 as it is) | store based Cursor| (/) - All messages 
> delivered|
> | remove comment from line 20 | vmQueueCursor| (/) - All messages delivered|
> | remove comment from line 20 | store based Cursor| (x) - Some (~0,5% 
> messages lost)|
> For completeness the used activemq-broker.xml was also attached - This is the 
> default one with the JDBC persistent store added.





[jira] [Closed] (AMQ-4105) InactivityIOException exception leading to ServiceMix not functioning

2014-01-16 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-4105.
-

Resolution: Incomplete

No further feedback from the reporter with logs or other details.  

> InactivityIOException exception leading to ServiceMix not functioning
> -
>
> Key: AMQ-4105
> URL: https://issues.apache.org/jira/browse/AMQ-4105
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.4.2
> Environment: OS: SunOS 5.10 Generic_147440-23 sun4v sparc 
> SUNW,Netra-T5440
> HW: Sun sparc Netra T5440
> ActiveMq version: 5.4.2
> ServiceMix version: 4.3.0
> Java version: 1.5.0_32
>Reporter: Mithun Sunku
>Priority: Critical
> Fix For: AGING_TO_DIE
>
> Attachments: servicemix.log.zip
>
>
> We are using the ActiveMQ broker with ServiceMix and have observed the following 
> InactivityIOException being reported in ServiceMix logs, and ActiveMQ closes the 
> MessageProducer and MessageConsumer. Then JMS-related 
> activemq.ConnectionFailedExceptions are observed and ServiceMix stops 
> functioning and clients are not able to register for topics and get any data 
> from ServiceMix.
> Transport failed: org.apache.activemq.transport.InactivityIOException: 
> Channel was inactive for too (>3) long: /127.0.0.1:44650 
> Based on the information provided in the forum, we have tried to disable the 
> Inactivity Monitor in the ActiveMq-broker.xml file available in the 
> ServiceMix/etc directory; however, the inactivity monitor is not changed from 
> the default value of 3. 
> ActiveMq-broker.xml from our setup:
> 
> 
> 
> <transportConnector uri="tcp://localhost:61616?wireFormat.maxInactivityDuration=0"/>
>  
>
> 
> Please let us know what has caused InactivityTimeout exception and how to 
> reproduce this issue and how this issue will be resolved.
> ServiceMix Logs:
> [2012-10-09 03:26:24,964] | INFO  | InactivityMonitor Async Task: 
> java.util.concurrent.ThreadPoolExecutor$Worker@49754b | Transport 
>| emq.broker.TransportConnection  238 | Transport failed: 
> org.apache.activemq.transport.InactivityIOException: Channel was inactive for 
> too (>3) long: /127.0.0.1:44650
> [2012-10-09 03:26:25,306] | INFO  | ActiveMQ Transport: 
> tcp:///127.0.0.1:44705 | Transport| 
> emq.broker.TransportConnection  238 | Transport failed: java.io.EOFException
> [2012-10-09 03:26:26,443] | WARN  | 
> pool-component.servicemix-wsn2005.provider-thread-36 | JmsPublisher   
>   | ervicemix.wsn.jms.JmsPublisher   97 | Error dispatching message
> javax.jms.IllegalStateException: The producer is closed
>   at 
> org.apache.activemq.ActiveMQMessageProducer.checkClosed(ActiveMQMessageProducer.java:169)
>   at 
> org.apache.activemq.ActiveMQMessageProducerSupport.getDeliveryMode(ActiveMQMessageProducerSupport.java:148)
>   at 
> org.apache.activemq.pool.PooledProducer.(PooledProducer.java:44)
>   at 
> org.apache.activemq.pool.PooledSession.createProducer(PooledSession.java:278)
>   at 
> org.apache.servicemix.wsn.jms.JmsPublisher.notify(JmsPublisher.java:89)[166:servicemix-wsn2005:2011.01.0]
>   at 
> org.apache.servicemix.wsn.AbstractNotificationBroker.handleNotify(AbstractNotificationBroker.java:134)[166:servicemix-wsn2005:2011.01.0]
>   at 
> org.apache.servicemix.wsn.AbstractNotificationBroker.notify(AbstractNotificationBroker.java:126)[166:servicemix-wsn2005:2011.01.0]
>   at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown 
> Source)[:1.6.0_33]
>   at java.lang.reflect.Method.invoke(Unknown Source)[:1.6.0_33]
>   at 
> org.apache.servicemix.wsn.component.WSNEndpoint.process(WSNEndpoint.java:166)[166:servicemix-wsn2005:2011.01.0]
>   at 
> org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:651)[121:servicemix-common:2011.01.0]
>   at 
> org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:606)[121:servicemix-common:2011.01.0]
>   at 
> org.apache.servicemix.common.AsyncBaseLifeCycle.processExchangeInTx(AsyncBaseLifeCycle.java:501)[121:servicemix-common:2011.01.0]
>   at 
> org.apache.servicemix.common.AsyncBaseLifeCycle$2.run(AsyncBaseLifeCycle.java:370)[121:servicemix-common:2011.01.0]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
> Source)[:1.6.0_33]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
> Source)[:1.6.0_33]
>   at java.lang.Thread.run(Unknown Source)[:1.6.0_33]
> [2012-10-09 03:26:26,631] | INFO  | 
> pool-component.servicemix-wsn2005.provider-thread-37 | JmsPullPoint   
>   | ervicemix.wsn.jms.JmsPullPoint  125 | Error retrieving messages
> javax.

[jira] [Updated] (AMQ-4970) Deletion of a queue inaffective across broker restart

2014-01-16 Thread Arthur Naseef (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arthur Naseef updated AMQ-4970:
---

Attachment: AMQ4970Test.zip

Updated AMQ4970Test.zip so it now tests both the 
ActiveMQConnection.destroyDestination() and BrokerView.deleteQueue() methods.

Still not reproducing the problem this way.

Tried another approach (capturing kahadb snapshots):

== part 1 ==
* Clear the kahadb
* Start the broker
* Create two destinations
* Stop the broker
* Take snapshot 1 of the kahadb
* Start the broker
* Remove one destination
* Stop the broker
* Take snapshot 2 of the kahadb
* Start the broker
* Confirm errant operation (removed destination recreated)

== part 2 ==
* Stop the broker
* Clear the kahadb
* Start the broker
* Create 2 destinations
* Remove 1 destination
* Stop the broker
* Capture snapshot 3 of the kahadb
* Start the broker
* Confirm correct operation (destination remains removed)

Comparing snapshots 2 and 3, there are differences (as expected):

* The db.redo file in the errant path contains the removed destination name.
* The db.redo file in the normal path does not contain the removed destination 
name.
* The normal path has a db.free file; the errant path does not.

I'm wondering whether some difference between the out-of-the-box configuration 
and the embedded broker used in AMQ4970Test is contributing to the problem.

Any ideas would be welcome.
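The snapshot comparison above (for example, db.free present in the normal path but missing from the errant one) can be scripted. Below is a minimal sketch in plain Java, using only the standard library; `SnapshotDiff` is a hypothetical helper for this comparison, not part of the attached AMQ4970Test, and it only diffs file names (content differences such as the destination name inside db.redo still need a manual look):

```java
import java.io.File;
import java.util.Set;
import java.util.TreeSet;

public class SnapshotDiff {

    // Names present in setA but not in setB.
    static Set<String> diff(Set<String> setA, Set<String> setB) {
        Set<String> only = new TreeSet<>(setA);
        only.removeAll(setB);
        return only;
    }

    // Top-level file names of a kahadb snapshot directory (the db.* files).
    static Set<String> names(File dir) {
        Set<String> out = new TreeSet<>();
        String[] entries = dir.list();
        if (entries != null) {
            for (String name : entries) {
                out.add(name);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // e.g. java SnapshotDiff snapshot2 snapshot3
        Set<String> a = names(new File(args[0]));
        Set<String> b = names(new File(args[1]));
        System.out.println("only in " + args[0] + ": " + diff(a, b));
        System.out.println("only in " + args[1] + ": " + diff(b, a));
    }
}
```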

> Deletion of a queue inaffective across broker restart
> -
>
> Key: AMQ-4970
> URL: https://issues.apache.org/jira/browse/AMQ-4970
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.9.0
> Environment: mac osx/mavericks
>Reporter: Arthur Naseef
> Attachments: AMQ4970Test.zip, AMQ4970Test.zip
>
>
> Deleting a queue, it is revived from persistent store after a broker restart. 
>  The following steps reproduce the problem:
> * Create a queue (confirmed using the REST client I/F)
> * Shutdown the broker
> * Startup the broker
> * Confirm queue still exists via the hawtio ui (correct operation so far)
> * Delete the queue
> * Confirm queue removed via the hawtio ui
> * Shutdown the broker
> * Startup the broker
> * Confirm queue was not recreated via hawtio ui (failed: queue still exists)





[jira] [Updated] (AMQ-3432) Slave broker able to start together with Master broker (shared file system master/slave setup)

2014-01-16 Thread Rob Davies (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Davies updated AMQ-3432:


Priority: Major  (was: Critical)

Looks like the distributed lock is not being honoured by the file system - we need 
information on the shared file system to progress this.

> Slave broker able to start together with Master broker (shared file system 
> master/slave setup)
> --
>
> Key: AMQ-3432
> URL: https://issues.apache.org/jira/browse/AMQ-3432
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.5.0
> Environment: Windows 2003, Weblogic Application Server 
>Reporter: PK Tan
>
> When deploying activemq server in shared file system master/slave setup on a 
> cluster of 2 servers without any of the MQ data files (journal folder, 
> kr-store folder and lock file) [i.e. a fresh state], the slave broker is able 
> to start up along with the master server. Both are active, as verified by 
> telnetting to the servers at the activemq port .  
> If you were to restart both servers, neither will be able to obtain the lock. 
> If you stop both servers, delete the lock file and restart both the servers, 
> we get back to the initial state where both servers are started and running 
> (i.e. telnettable).
> This issue might be related to AMQ-3273.  I'm logging another case for it 
> because the environment and the situation is different . 





[jira] [Commented] (AMQ-4974) Remove NetworkConnectionsCleanedupTest?

2014-01-16 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873515#comment-13873515
 ] 

Gary Tully commented on AMQ-4974:
-

yep, just chuck it.

> Remove NetworkConnectionsCleanedupTest?
> ---
>
> Key: AMQ-4974
> URL: https://issues.apache.org/jira/browse/AMQ-4974
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Test Cases
>Reporter: Kevin Earls
>Priority: Minor
>
> This test contains the comment:  
>  // skip this test. it runs for an hour, doesn't assert anything, and could 
> probably
> // just be removed (seems like a throwaway impl for 
> https://issues.apache.org/activemq/browse/AMQ-1202)
> I've added an @Ignore so it won't count as a failure on CI systems.  Should I 
> just remove it?





[jira] [Commented] (AMQ-4837) LevelDB corrupted when in a replication cluster

2014-01-16 Thread Rob Davies (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873508#comment-13873508
 ] 

Rob Davies commented on AMQ-4837:
-

is this resolved?

> LevelDB corrupted when in a replication cluster
> ---
>
> Key: AMQ-4837
> URL: https://issues.apache.org/jira/browse/AMQ-4837
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.9.0
> Environment: CentOS, Linux version 2.6.32-71.29.1.el6.x86_64
> java-1.7.0-openjdk.x86_64/java-1.6.0-openjdk.x86_64
> zookeeper-3.4.5.2
>Reporter: Guillaume
>Assignee: Hiram Chirino
>Priority: Critical
> Attachments: LevelDBCorrupted.zip, activemq.xml
>
>
> I have clustered 3 ActiveMQ instances using replicated leveldb and zookeeper. 
> When performing some tests using the Web UI, I came across issues that appear 
> to corrupt the leveldb data files.
> The issue can be replicated by performing the following steps:
> 1. Start 3 activemq nodes.
> 2. Push a message to the master (Node1) and browse the queue using the web UI.
> 3. Stop the master node (Node1).
> 4. Push a message to the new master (Node2) and browse the queue using the 
> web UI. Message summary and queue content ok.
> 5. Start Node1.
> 6. Stop the master node (Node2).
> 7. Browse the queue using the web UI on the new master (Node3). Message 
> summary is ok; however, when clicking on the queue, there are no message 
> details. An error (see below) is logged by the master, which attempts a 
> restart.
> From this point, the database appears to be corrupted and the same error 
> occurs to each node infinitely (shutdown/restart). The only way around is to 
> stop the nodes and clear the data files.
> However when a message is pushed between step 5 and 6, the error doesn’t 
> occur.
> =
> Leveldb configuration on the 3 instances:
> <persistenceAdapter>
>   <replicatedLevelDB
>     directory="${activemq.data}/leveldb"
>     replicas="3"
>     bind="tcp://0.0.0.0:0"
>     zkAddress="zkserver:2181"
>     zkPath="/activemq/leveldb-stores"
>     />
> </persistenceAdapter>
> =
> The error is:
> INFO | Stopping BrokerService[localhost] due to exception, java.io.IOException
> java.io.IOException
> at 
> org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:39)
> at 
> org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:543)
> at 
> org.apache.activemq.leveldb.LevelDBClient.might_fail_using_index(LevelDBClient.scala:974)
> at 
> org.apache.activemq.leveldb.LevelDBClient.collectionCursor(LevelDBClient.scala:1270)
> at 
> org.apache.activemq.leveldb.LevelDBClient.queueCursor(LevelDBClient.scala:1194)
> at 
> org.apache.activemq.leveldb.DBManager.cursorMessages(DBManager.scala:708)
>at 
> org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.recoverNextMessages(LevelDBStore.scala:741)
> at 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:106)
> at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:258)
> at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:108)
> at 
> org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:157)
> at 
> org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1875)
> at 
> org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2086)
> at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1581)
> at 
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
> at 
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1198)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1194)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1$$anonfun$apply$mcV$sp$12.apply(LevelDBClient.scala:1272)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun

[jira] [Resolved] (AMQ-4938) Queue Messages lost after read timeout on REST API.

2014-01-16 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved AMQ-4938.
---

Resolution: Fixed

Patch applied with some minor code cleanup.  

> Queue Messages lost after read timeout on REST API.
> ---
>
> Key: AMQ-4938
> URL: https://issues.apache.org/jira/browse/AMQ-4938
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.8.0, 5.9.0, 5.10.0
> Environment: Win32, Linux
>Reporter: Peter Eisenlohr
>Priority: Critical
> Attachments: AMQ-4938.patch, AMQ-4938B.patch
>
>
> I have been trying to send/receive messages via a Queue using the [REST 
> API|http://activemq.apache.org/rest.html]. While testing I found that some 
> messages got lost after a consuming request timed out when no message was 
> available.
> Here is a transcript of the test case I used:
> {code}
> #
> # OK: send first, consume later
> #
> $ curl -d "body=message" "http://localhost:8161/api/message/TEST?type=queue"
> Message sent
> $ wget --no-http-keep-alive -q -O - 
> "http://localhost:8161/api/message/TEST?type=queue&clientId=GETID&readTimeout=1000"
> message
> #
> # OK: start consuming, then send (within timeout)
> #
> $ wget --no-http-keep-alive -q -O - 
> "http://localhost:8161/api/message/TEST?type=queue&clientId=GETID&readTimeout=5000" &
> [1] 5172
> $ curl -d "body=message" "http://localhost:8161/api/message/TEST?type=queue"
> messageMessage sent[1]+  Fertig  wget --no-http-keep-alive -q 
> -O - 
> "http://localhost:8161/api/message/TEST?type=queue&clientId=GETID&readTimeout=5000"
> #
> # NOK: start consuming, wait for timeout, then send and consume again
> #
> $ wget --no-http-keep-alive -q -O - 
> "http://localhost:8161/api/message/TEST?type=queue&clientId=GETID&readTimeout=5000"
> $ curl -d "body=message" "http://localhost:8161/api/message/TEST?type=queue"
> Message sent
> $ wget --no-http-keep-alive -q -O - 
> "http://localhost:8161/api/message/TEST?type=queue&clientId=GETID&readTimeout=5000"
> {code}
> The last *wget* returns after the given read timeout without any message. 
> When looking at the management console, the message has been consumed.
> I tested this with 5.8.0 on linux as well as with 5.8.0, 5.9.0 and a freshly 
> built 5.10.0 on windows.





[jira] [Updated] (AMQ-4955) TCP Connections and related thread leak.

2014-01-16 Thread Rob Davies (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Davies updated AMQ-4955:


Priority: Major  (was: Critical)

It's not clear this is a broker problem and not an application problem - 
downgrading to Major.

> TCP Connections and related thread leak.
> 
>
> Key: AMQ-4955
> URL: https://issues.apache.org/jira/browse/AMQ-4955
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0
> Environment: Windows 2008 R2, Jdk 1.7.40
>Reporter: Murali Mogalayapalli
> Attachments: TCP-Connection.jpg, ThreadStack.jpg, activemq.xml
>
>
> TCP Connections and related thread leak.
> Scenario
> Active MQ version 5.8
> NMS Client version 1.6
> OS - Windows 2008 R2
> JDK - 1.7.x
> activemq.xml is attached
> If client connectivity is lost between the time the initial socket is 
> created and the exchange of the wire format, the ActiveMQ server's 
> client-handling thread gets blocked in a socket read, leaving the TCP 
> connection and the related thread hanging.
> Here are the steps to recreate
> 1. Configure the Active MQ server with the activemq.xml attached.
> 2. Start the client in a debugger and have a break point at a place in such a 
> way that the client can be disconnected after the socket is established.
> 3. Once the breakpoint is hit, disconnect the client machine from the network
> 4. Kill the client - this basically simulates a situation where the socket 
> tear-down packets never reach the ActiveMQ server.
> 5. Open the JConsole. Look for the hanging TCP connection and the related 
> thread.
> Is there a configurable option in ActiveMQ to sweep and close, at a regular 
> interval, connections that still haven't finished the wire protocol 
> negotiation?
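Setting aside which broker-side ActiveMQ option covers this case (the issue leaves that open), the defensive pattern being asked for is a bounded read during wire format negotiation, so a silent peer cannot pin a thread forever. A minimal plain-java.net sketch of that idea, illustrative only and not broker code:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class NegotiationSweep {
    // Accepts one connection and gives the peer readTimeoutMs to send its
    // first byte (a stand-in for the wire format info). Returns true when
    // the connection was reaped because nothing arrived in time.
    public static boolean reapedSilentClient(int readTimeoutMs) throws IOException {
        try (ServerSocket server = new ServerSocket(0);
             Socket silentClient = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {
            accepted.setSoTimeout(readTimeoutMs);  // bound the blocking read
            try {
                accepted.getInputStream().read();  // would block forever otherwise
                return false;                      // peer actually sent something
            } catch (SocketTimeoutException timedOut) {
                return true;                       // close and free the thread
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(reapedSilentClient(500)
                ? "silent connection reaped" : "client sent data");
    }
}
```

The try-with-resources block also closes the socket on timeout, which is exactly what frees the blocked server thread in the reported scenario.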





[jira] [Created] (AMQ-4975) DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest.testSendReceive fails intermittently

2014-01-16 Thread Kevin Earls (JIRA)
Kevin Earls created AMQ-4975:


 Summary: 
DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest.testSendReceive fails 
intermittently
 Key: AMQ-4975
 URL: https://issues.apache.org/jira/browse/AMQ-4975
 Project: ActiveMQ
  Issue Type: Bug
  Components: Test Cases
Reporter: Kevin Earls
Priority: Minor


This test fails intermittently with the error below.  It typically fails at 
around message 180-185, where it looks like it receives the same message twice.

(This test is defined in JmsSendReceiveTestSupport.  I'll add an overridden 
no-op version in DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest for now so it 
doesn't cause CI builds to fail)


---
 T E S T S
---
Running 
org.apache.activemq.broker.ft.DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest
Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 70.702 sec <<< 
FAILURE! - in 
org.apache.activemq.broker.ft.DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest
testSendReceive(org.apache.activemq.broker.ft.DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest)
  Time elapsed: 18.286 sec  <<< FAILURE!
junit.framework.ComparisonFailure: Message: 181 expected: but was:
at junit.framework.Assert.assertEquals(Assert.java:100)
at junit.framework.TestCase.assertEquals(TestCase.java:261)
at 
org.apache.activemq.JmsSendReceiveTestSupport.assertMessagesReceivedAreValid(JmsSendReceiveTestSupport.java:165)
at 
org.apache.activemq.JmsSendReceiveTestSupport.assertMessagesAreReceived(JmsSendReceiveTestSupport.java:128)
at 
org.apache.activemq.JmsSendReceiveTestSupport.testSendReceive(JmsSendReceiveTestSupport.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at 
org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:107)
at 
org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:113)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Results :

Failed tests: 
  
DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest>CombinationTestSupport.runBare:113->CombinationTestSupport.runBare:107->JmsSendReceiveTestSupport.testSendReceive:104->JmsSendReceiveTestSupport.assertMessagesAreReceived:128->JmsSendReceiveTestSupport.assertMessagesReceivedAreValid:165
 Message: 181 expected: but 
was:

Tests run: 4, Failures: 1, Errors: 0, Skipped: 0








[jira] [Created] (AMQ-4974) Remove NetworkConnectionsCleanedupTest?

2014-01-16 Thread Kevin Earls (JIRA)
Kevin Earls created AMQ-4974:


 Summary: Remove NetworkConnectionsCleanedupTest?
 Key: AMQ-4974
 URL: https://issues.apache.org/jira/browse/AMQ-4974
 Project: ActiveMQ
  Issue Type: Bug
  Components: Test Cases
Reporter: Kevin Earls
Priority: Minor


This test contains the comment:  

 // skip this test. it runs for an hour, doesn't assert anything, and could 
probably
// just be removed (seems like a throwaway impl for 
https://issues.apache.org/activemq/browse/AMQ-1202)

I've added an @Ignore so it won't count as a failure on CI systems.  Should I 
just remove it?






[jira] [Created] (AMQ-4973) UnreliableUdpTransportTest and MulticastTransportTest have test failures

2014-01-16 Thread Kevin Earls (JIRA)
Kevin Earls created AMQ-4973:


 Summary: UnreliableUdpTransportTest and MulticastTransportTest 
have test failures
 Key: AMQ-4973
 URL: https://issues.apache.org/jira/browse/AMQ-4973
 Project: ActiveMQ
  Issue Type: Bug
  Components: Test Cases
Reporter: Kevin Earls
Priority: Minor


The testSendingMediumMessage and testSendingLargeMessage test cases fail for 
both of these as shown below.  

UnreliableUdpTransportTest uses 
org.apache.activemq.transport.reliable.ReliableTransport, which is deprecated.  
Should we continue to run these tests?

---
 T E S T S
---
Running org.apache.activemq.transport.multicast.MulticastTransportTest
Tests run: 3, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 40.6 sec <<< 
FAILURE! - in org.apache.activemq.transport.multicast.MulticastTransportTest
testSendingMediumMessage(org.apache.activemq.transport.multicast.MulticastTransportTest)
  Time elapsed: 40.402 sec  <<< FAILURE!
junit.framework.AssertionFailedError: Should have received a Command by now!
at junit.framework.Assert.fail(Assert.java:57)
at junit.framework.Assert.assertTrue(Assert.java:22)
at junit.framework.Assert.assertNotNull(Assert.java:256)
at junit.framework.TestCase.assertNotNull(TestCase.java:426)
at 
org.apache.activemq.transport.udp.UdpTestSupport.assertCommandReceived(UdpTestSupport.java:257)
at 
org.apache.activemq.transport.udp.UdpTestSupport.assertSendTextMessage(UdpTestSupport.java:112)
at 
org.apache.activemq.transport.udp.UdpTestSupport.testSendingMediumMessage(UdpTestSupport.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

testSendingLargeMessage(org.apache.activemq.transport.multicast.MulticastTransportTest)
  Time elapsed: 0.009 sec  <<< FAILURE!
junit.framework.AssertionFailedError: Failed to send to transport: 
java.net.SocketException: Socket is closed
at junit.framework.Assert.fail(Assert.java:57)
at junit.framework.TestCase.fail(TestCase.java:227)
at 
org.apache.activemq.transport.udp.UdpTestSupport.assertSendTextMessage(UdpTestSupport.java:123)
at 
org.apache.activemq.transport.udp.UdpTestSupport.testSendingLargeMessage(UdpTestSupport.java:90)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.j

[jira] [Created] (AMQ-4972) FailoverConsumerTest.testPublisherFailsOver is failing

2014-01-16 Thread Kevin Earls (JIRA)
Kevin Earls created AMQ-4972:


 Summary: FailoverConsumerTest.testPublisherFailsOver is failing
 Key: AMQ-4972
 URL: https://issues.apache.org/jira/browse/AMQ-4972
 Project: ActiveMQ
  Issue Type: Bug
Reporter: Kevin Earls
Assignee: Kevin Earls
Priority: Minor


I get the following error for this test:

Running org.apache.activemq.transport.failover.FailoverConsumerTest
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.936 sec <<< 
FAILURE! - in org.apache.activemq.transport.failover.FailoverConsumerTest
testPublisherFailsOver(org.apache.activemq.transport.failover.FailoverConsumerTest)
  Time elapsed: 5.479 sec  <<< FAILURE!
junit.framework.AssertionFailedError: expected:<1> but was:<0>
at junit.framework.Assert.fail(Assert.java:57)
at junit.framework.Assert.failNotEquals(Assert.java:329)
at junit.framework.Assert.assertEquals(Assert.java:78)
at junit.framework.Assert.assertEquals(Assert.java:234)
at junit.framework.Assert.assertEquals(Assert.java:241)
at junit.framework.TestCase.assertEquals(TestCase.java:409)
at 
org.apache.activemq.transport.failover.FailoverConsumerTest.testPublisherFailsOver(FailoverConsumerTest.java:121)


Results :

Failed tests: 
  
FailoverConsumerTest>CombinationTestSupport.runBare:113->CombinationTestSupport.runBare:107->testPublisherFailsOver:121
 expected:<1> but was:<0>

Tests run: 1, Failures: 1, Errors: 0, Skipped: 0

[INFO] 
[INFO] BUILD FAILURE
[INFO] 







[jira] [Commented] (AMQ-4971) OOM in DemandForwardingBridge

2014-01-16 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873301#comment-13873301
 ] 

Gary Tully commented on AMQ-4971:
-

the prefetch should limit the number of messages pending an ack at one time. 

try{code}{code}


> OOM in DemandForwardingBridge
> -
>
> Key: AMQ-4971
> URL: https://issues.apache.org/jira/browse/AMQ-4971
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.9.0
>Reporter: Yuriy Sidelnikov
>  Labels: features
> Attachments: AMQ-4971.patch
>
>
> DemandForwardingBridge sends messages to the other broker and asynchronously 
> waits for ACKs keeping message bodies in heap. Amount of un-ACK-ed messages 
> kept in heap is not limited. If local producer is fast then whole heap will 
> be consumed by messages waiting to be ACK-ed by other broker.
> Possible option to fix the issue:
> Don't wait for ACK from other broker when forwarding the message if some 
> threshold of un-ACK-ed messages is reached.





[jira] [Comment Edited] (AMQ-4971) OOM in DemandForwardingBridge

2014-01-16 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873247#comment-13873247
 ] 

Nikolay Martynov edited comment on AMQ-4971 at 1/16/14 10:51 AM:
-

The issue can be easily reproduced in the following scenario:
1. Start 3 brokers. Let it be 0, 1, 2
2. In brokers 1 and 2 enable conduit duplex network connector to 0
3. Connect tcp:// producer for topic A to 0 (jmeter)
4. Connect vm:// consumer for topic A to 1 (simple jms message forwarder in 
camel: from jms, to jms)
5. Connect vm:// producer for topic B to 1 (simple jms message forwarder in 
camel: from jms, to jms)
6. Connect vm:// consumer for topic B to 2 (simple jms message forwarder in 
camel: from jms, to jms)
7. Connect vm:// producer for topic C to 2 (simple jms message forwarder in 
camel: from jms, to jms)
8. Connect tcp:// consumer for topic C to 0 (jmeter)
In summary, we just put bi-directional load on each of the network bridges, so 
that each bridge has to both send and receive messages and acks.
Since vm:// and Camel embedded in the same JVM are faster than tcp:// and the 
peer broker, the bridge very quickly fills the heap with responseCallbacks used 
to handle acks, each referencing a message body. This isn't controlled by 
broker memory limits or producer flow control, and the bridges die on OOM.

Additionally, VMTransport also keeps references to asynchronously handled 
messages. Since vm:// is faster than tcp://, this is another source of quick 
OOM where producer flow control and memory limits have no effect. While the 
queue is bounded, the default size is 2000, and with 1MB messages you quickly 
run out of heap (the docs don't seem to explain if and how it can be adjusted). 
This wouldn't be a problem if it could be controlled by broker memory limits 
and producer flow control.


was (Author: nickolay_martinov):
The issue can be easily reproduced in the following scenario:
1. Start 3 brokers. Let it be 0, 1, 2
2. In brokers 1 and 2 enable conduit duplex network connector to 0
3. Connect tcp:// producer for topic A to 0 (jmeter)
4. Connect vm:// consumer for topic A to 1 (simple jms message forwarder in 
camel: from jms, to jms)
5. Connect vm:// producer for topic B to 1 (simple jms message forwarder in 
camel: from jms, to jms)
6. Connect vm:// consumer for topic B to 2 (simple jms message forwarder in 
camel: from jms, to jms)
7. Connect vm:// producer for topic C to 2 (simple jms message forwarder in 
camel: from jms, to jms)
8. Connect tcp:// consumer for topic C to 0 (jmeter)
In summary, we just put bi-directional load to each of network bridges where 
each bridge has to both send and receive messages and acks.
Since vm:// and camel embedded in the same JVM are faster than tcp:// and peer 
broker, bridge very quickly fills heap with responseCallback's used to handle 
ack and reference message body. This isnt controlled by broker memory limits or 
producer flow control and bridges die on OOM.

Additionally, VMTransport will also keep references to asynchronously handles 
messages. Since vm:// is faster than tcp://, this is another source of quick 
OOM where producer flow control and memory limits have no any effect. While 
queue is bounded, default size is 2000 and with 1MB messages you quickly run 
out of heap (doc doesnt seem to explain if and how it could be adjusted). 
Wouldnt be a problem if it could be controlled by broker memory limits and 
producer flow control.

> OOM in DemandForwardingBridge
> -
>
> Key: AMQ-4971
> URL: https://issues.apache.org/jira/browse/AMQ-4971
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.9.0
>Reporter: Yuriy Sidelnikov
>  Labels: features
> Attachments: AMQ-4971.patch
>
>
> DemandForwardingBridge sends messages to the other broker and asynchronously 
> waits for ACKs keeping message bodies in heap. Amount of un-ACK-ed messages 
> kept in heap is not limited. If local producer is fast then whole heap will 
> be consumed by messages waiting to be ACK-ed by other broker.
> Possible option to fix the issue:
> Don't wait for ACK from other broker when forwarding the message if some 
> threshold of un-ACK-ed messages is reached.





[jira] [Updated] (AMQ-4971) OOM in DemandForwardingBridge

2014-01-16 Thread Nikolay Martynov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Martynov updated AMQ-4971:
--

Attachment: AMQ-4971.patch

Attached a patch with a workaround for those who face OOM in the network 
bridge. This is hardly a solution (it might lower vm:// performance on small 
messages and weaken delivery guarantees over bridges), but at least it keeps 
the JVM from crashing with OOM.
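The threshold idea from the issue description - stop accepting new forwards once too many messages are un-ACKed - can be sketched with a semaphore that blocks the fast local side until the slow remote side acknowledges. All names here are illustrative, not the bridge's actual API or the attached patch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedBridge {
    private final Semaphore unacked;  // permits = allowed un-ACKed messages
    private final AtomicInteger inFlight = new AtomicInteger();
    private final AtomicInteger maxInFlight = new AtomicInteger();

    public BoundedBridge(int maxUnacked) { unacked = new Semaphore(maxUnacked); }

    // Blocks the local producer once the cap is reached, instead of queueing
    // another message body on the heap while waiting for the remote ACK.
    public void forward(String msg, ExecutorService remoteBroker) throws InterruptedException {
        unacked.acquire();
        maxInFlight.accumulateAndGet(inFlight.incrementAndGet(), Math::max);
        remoteBroker.execute(() -> {  // the "remote broker" handles it and ACKs
            inFlight.decrementAndGet();
            unacked.release();
        });
    }

    public int observedMaxInFlight() { return maxInFlight.get(); }

    // Pushes `messages` through a bridge capped at `cap` un-ACKed messages
    // and reports the largest in-flight count ever observed.
    public static int run(int messages, int cap) throws InterruptedException {
        BoundedBridge bridge = new BoundedBridge(cap);
        ExecutorService remote = Executors.newSingleThreadExecutor();
        for (int i = 0; i < messages; i++) bridge.forward("m" + i, remote);
        remote.shutdown();
        remote.awaitTermination(10, TimeUnit.SECONDS);
        return bridge.observedMaxInFlight();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("max un-ACKed in flight: " + run(1000, 10));
    }
}
```

Because the permit is only released in the ACK callback, heap usage for pending bodies is bounded by the cap regardless of how much faster the vm:// side is.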

> OOM in DemandForwardingBridge
> -
>
> Key: AMQ-4971
> URL: https://issues.apache.org/jira/browse/AMQ-4971
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.9.0
>Reporter: Yuriy Sidelnikov
>  Labels: features
> Attachments: AMQ-4971.patch
>
>
> DemandForwardingBridge sends messages to the other broker and asynchronously 
> waits for ACKs keeping message bodies in heap. Amount of un-ACK-ed messages 
> kept in heap is not limited. If local producer is fast then whole heap will 
> be consumed by messages waiting to be ACK-ed by other broker.
> Possible option to fix the issue:
> Don't wait for ACK from other broker when forwarding the message if some 
> threshold of un-ACK-ed messages is reached.





[jira] [Commented] (AMQ-4971) OOM in DemandForwardingBridge

2014-01-16 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873247#comment-13873247
 ] 

Nikolay Martynov commented on AMQ-4971:
---

The issue can be easily reproduced in the following scenario:
1. Start 3 brokers. Let it be 0, 1, 2
2. In brokers 1 and 2 enable conduit duplex network connector to 0
3. Connect tcp:// producer for topic A to 0 (jmeter)
4. Connect vm:// consumer for topic A to 1 (simple jms message forwarder in 
camel: from jms, to jms)
5. Connect vm:// producer for topic B to 1 (simple jms message forwarder in 
camel: from jms, to jms)
6. Connect vm:// consumer for topic B to 2 (simple jms message forwarder in 
camel: from jms, to jms)
7. Connect vm:// producer for topic C to 2 (simple jms message forwarder in 
camel: from jms, to jms)
8. Connect tcp:// consumer for topic C to 0 (jmeter)
In summary, we just put bi-directional load on each of the network bridges, so 
that each bridge has to both send and receive messages and acks.
Since vm:// and Camel embedded in the same JVM are faster than tcp:// and the 
peer broker, the bridge very quickly fills the heap with responseCallbacks used 
to handle acks, each referencing a message body. This isn't controlled by 
broker memory limits or producer flow control, and the bridges die on OOM.

Additionally, VMTransport also keeps references to asynchronously handled 
messages. Since vm:// is faster than tcp://, this is another source of quick 
OOM where producer flow control and memory limits have no effect. While the 
queue is bounded, the default size is 2000, and with 1MB messages you quickly 
run out of heap (the docs don't seem to explain if and how it can be adjusted). 
This wouldn't be a problem if it could be controlled by broker memory limits 
and producer flow control.

> OOM in DemandForwardingBridge
> -
>
> Key: AMQ-4971
> URL: https://issues.apache.org/jira/browse/AMQ-4971
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.9.0
>Reporter: Yuriy Sidelnikov
>  Labels: features
>
> DemandForwardingBridge sends messages to the other broker and asynchronously 
> waits for ACKs keeping message bodies in heap. Amount of un-ACK-ed messages 
> kept in heap is not limited. If local producer is fast then whole heap will 
> be consumed by messages waiting to be ACK-ed by other broker.
> Possible option to fix the issue:
> Don't wait for ACK from other broker when forwarding the message if some 
> threshold of un-ACK-ed messages is reached.


