[jira] [Commented] (AMQ-5145) JMS Failed to schedule job Chunk stream does not exist

2014-04-23 Thread Rajeev (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979287#comment-13979287
 ] 

Rajeev commented on AMQ-5145:
-

Hi

Can anybody look into the issue raised?

Thanks,
Rajeev

> JMS Failed to schedule job  Chunk stream does not exist
> ---
>
> Key: AMQ-5145
> URL: https://issues.apache.org/jira/browse/AMQ-5145
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.6.0
> Environment: RedHat Linux 5.8
>Reporter: Rajeev
>
> We are getting the below exception in our production environment. It seems 
> that there is some issue with the scheduler DB.
> JMS Failed to schedule job | 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl | JobScheduler:JMS
> java.io.EOFException: Chunk stream does not exist, page: 9536 is marked free
> at org.apache.kahadb.page.Transaction$2.readPage(Transaction.java:460)
> at org.apache.kahadb.page.Transaction$2.<init>(Transaction.java:437)
> at 
> org.apache.kahadb.page.Transaction.openInputStream(Transaction.java:434)
> at org.apache.kahadb.page.Transaction.load(Transaction.java:410)
> at org.apache.kahadb.page.Transaction.load(Transaction.java:367)
> at org.apache.kahadb.index.BTreeIndex.loadNode(BTreeIndex.java:262)
> at org.apache.kahadb.index.BTreeNode.getChild(BTreeNode.java:225)
> at org.apache.kahadb.index.BTreeNode.getFirst(BTreeNode.java:600)
> at org.apache.kahadb.index.BTreeIndex.getFirst(BTreeIndex.java:240)
> at 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl.getNextToSchedule(JobSchedulerImpl.java:410)
> at 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl.mainLoop(JobSchedulerImpl.java:523)
> at 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl.run(JobSchedulerImpl.java:429)
> at java.lang.Thread.run(Thread.java:662)
> Please help to solve this issue. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (AMQ-5082) ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening

2014-04-23 Thread Kevin McLaughlin (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979145#comment-13979145
 ] 

Kevin McLaughlin commented on AMQ-5082:
---

The 7-step process above requires a step 0: have a rogue process listening on 
61619 on a single host in your cluster.  After stopping that process, I can't 
easily reproduce the problem anymore.  I'm not sure whether ActiveMQ could check 
the ports it requires at startup, even though it is not going to bind to them 
unless it becomes the master.
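
A startup check of that sort could be as simple as trying to bind each required
port and releasing it again. A rough sketch of the idea (not something the broker
does today; ports are illustrative):

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortCheck {
    // Returns true if the port can be bound locally, i.e. no other process
    // (such as a rogue listener on 61619) is already holding it.
    static boolean portIsFree(int port) {
        try (ServerSocket probe = new ServerSocket()) {
            probe.setReuseAddress(false);
            probe.bind(new InetSocketAddress(port));
            return true;
        } catch (IOException bindFailed) {
            return false;
        }
    }

    public static void main(String[] args) {
        for (int port : new int[]{61616, 61619}) {
            System.out.println("port " + port + " free: " + portIsFree(port));
        }
    }
}
{code}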

> ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening
> ---
>
> Key: AMQ-5082
> URL: https://issues.apache.org/jira/browse/AMQ-5082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.9.0, 5.10.0
>Reporter: Scott Feldstein
>Priority: Critical
> Attachments: 03-07.tgz, amq_5082_threads.tar.gz, 
> mq-node1-cluster.failure, mq-node2-cluster.failure, mq-node3-cluster.failure, 
> zookeeper.out-cluster.failure
>
>
> I have a 3-node AMQ cluster and one ZooKeeper node using a replicatedLevelDB 
> persistence adapter.
> {code}
> <persistenceAdapter>
>   <replicatedLevelDB
>     directory="${activemq.data}/leveldb"
>     replicas="3"
>     bind="tcp://0.0.0.0:0"
>     zkAddress="zookeep0:2181"
>     zkPath="/activemq/leveldb-stores"/>
> </persistenceAdapter>
> {code}
> After about a day or so of sitting idle there are cascading failures and the 
> cluster completely stops listening altogether.
> I can reproduce this consistently on 5.9 and the latest 5.10 (commit 
> 2360fb859694bacac1e48092e53a56b388e1d2f0).  I am going to attach logs from 
> the three MQ nodes and the ZooKeeper logs that reflect the time when the 
> cluster starts having issues.
> The cluster stops listening at Mar 4, 2014 4:56:50 AM (within 5 seconds).
> The OSs are all CentOS 5.9 on one ESX server, so I doubt networking is an 
> issue.
> If you need more data it should be pretty easy to get whatever is needed 
> since it is consistently reproducible.
> This bug may be related to AMQ-5026, but looks different enough to file a 
> separate issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (AMQ-5082) ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening

2014-04-23 Thread Kevin McLaughlin (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979002#comment-13979002
 ] 

Kevin McLaughlin commented on AMQ-5082:
---

This actually seems pretty easy to reproduce without network errors or slow IO.

# Set up a 3-node ActiveMQ cluster
# Set up a 3-node ZK cluster on the same hosts
# Start ActiveMQ and ZK on each host
# Determine which ActiveMQ process is the master (telnet to 61616 on each host; 
see the sketch after this list)
# Look in the ActiveMQ log of the master and determine which ZK it is connected 
to via the DEBUG log
{{Got ping response for sessionid: ... after 1ms | 
org.apache.zookeeper.ClientCnxn | main-SendThread($host:2181)}}
# Restart that ZK process.
# Most of the time the ActiveMQ cluster does not recover and is dead at this 
point.  If it does recover, go back to step 4.  I haven't made it past rolling 
two ZK processes.
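
For step 4, the manual telnet check can be scripted; a rough sketch (hostnames,
port and timeout are illustrative) that treats whichever node accepts a
connection on the OpenWire port as the current master:

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class FindMaster {
    // Only the elected master binds the client transport port in a
    // replicatedLevelDB cluster, so a successful connect marks the master.
    static boolean isListening(String host, int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 1000);
            return true;
        } catch (IOException e) {
            return false;   // refused or timed out: slave or down
        }
    }

    public static void main(String[] args) {
        for (String host : new String[]{"mq-node1", "mq-node2", "mq-node3"}) {
            System.out.println(host + " listening on 61616: " + isListening(host, 61616));
        }
    }
}
{code}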

> ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening
> ---
>
> Key: AMQ-5082
> URL: https://issues.apache.org/jira/browse/AMQ-5082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.9.0, 5.10.0
>Reporter: Scott Feldstein
>Priority: Critical
> Attachments: 03-07.tgz, amq_5082_threads.tar.gz, 
> mq-node1-cluster.failure, mq-node2-cluster.failure, mq-node3-cluster.failure, 
> zookeeper.out-cluster.failure
>
>
> I have a 3-node AMQ cluster and one ZooKeeper node using a replicatedLevelDB 
> persistence adapter.
> {code}
> <persistenceAdapter>
>   <replicatedLevelDB
>     directory="${activemq.data}/leveldb"
>     replicas="3"
>     bind="tcp://0.0.0.0:0"
>     zkAddress="zookeep0:2181"
>     zkPath="/activemq/leveldb-stores"/>
> </persistenceAdapter>
> {code}
> After about a day or so of sitting idle there are cascading failures and the 
> cluster completely stops listening altogether.
> I can reproduce this consistently on 5.9 and the latest 5.10 (commit 
> 2360fb859694bacac1e48092e53a56b388e1d2f0).  I am going to attach logs from 
> the three MQ nodes and the ZooKeeper logs that reflect the time when the 
> cluster starts having issues.
> The cluster stops listening at Mar 4, 2014 4:56:50 AM (within 5 seconds).
> The OSs are all CentOS 5.9 on one ESX server, so I doubt networking is an 
> issue.
> If you need more data it should be pretty easy to get whatever is needed 
> since it is consistently reproducible.
> This bug may be related to AMQ-5026, but looks different enough to file a 
> separate issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-5160) MQTT retained messages bypass Authentication / Authorization

2014-04-23 Thread Surf (JIRA)
Surf created AMQ-5160:
-

 Summary: MQTT retained messages bypass Authentication / 
Authorization
 Key: AMQ-5160
 URL: https://issues.apache.org/jira/browse/AMQ-5160
 Project: ActiveMQ
  Issue Type: Bug
  Components: MQTT
Affects Versions: 5.9.1
Reporter: Surf
Priority: Critical


I am using MQTT on AMQ 5.9.1.
After the latest MQTT hardening from [~dhirajsb], there is an issue with MQTT 
retained messages.

Simple case:
Set authentication/authorization for two different topics.
Send a retained message to one topic.
Subscribe to "#" as the other (second) user.
The second user is shown the retained messages published to TOPIC 1.

I have attached test configurations.
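
The subscription side of the repro can be driven from any MQTT client; a rough
sketch using the Eclipse Paho client (an assumed library; broker URL and user
names are illustrative, with user2 only authorized for the second topic):

{code}
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class RetainedLeakRepro {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", "user2-client");
        client.setCallback(new MqttCallback() {
            public void connectionLost(Throwable cause) { }
            public void messageArrived(String topic, MqttMessage msg) {
                // user2 has no read permission on the first topic, yet its
                // retained message still arrives via the "#" wildcard.
                System.out.println("received on " + topic + ": " + msg);
            }
            public void deliveryComplete(IMqttDeliveryToken token) { }
        });
        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setUserName("user2");
        opts.setPassword("user2pass".toCharArray());
        client.connect(opts);
        client.subscribe("#");
        Thread.sleep(2000);   // wait for any retained messages to be delivered
        client.disconnect();
    }
}
{code}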





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (AMQ-5160) MQTT retained messages bypass Authentication / Authorization

2014-04-23 Thread Surf (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surf updated AMQ-5160:
--

Attachment: users.properties
login.config
groups.properties
activemq.xml

> MQTT retained messages bypass Authentication / Authorization
> 
>
> Key: AMQ-5160
> URL: https://issues.apache.org/jira/browse/AMQ-5160
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: MQTT
>Affects Versions: 5.9.1
>Reporter: Surf
>Priority: Critical
>  Labels: authentication, authorization, mqtt, security
> Attachments: activemq.xml, groups.properties, login.config, 
> users.properties
>
>
> I am using MQTT on AMQ 5.9.1.
> After the latest MQTT hardening from [~dhirajsb], there is an issue with MQTT 
> retained messages.
> Simple case:
> Set authentication/authorization for two different topics.
> Send a retained message to one topic.
> Subscribe to "#" as the other (second) user.
> The second user is shown the retained messages published to TOPIC 1.
> I have attached test configurations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (AMQ-5159) STOMP browse gets null pointer exception if ACK mode is not AUTO

2014-04-23 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved AMQ-5159.
---

Resolution: Fixed

Fixed on trunk

> STOMP browse gets null pointer exception if ACK mode is not AUTO
> 
>
> Key: AMQ-5159
> URL: https://issues.apache.org/jira/browse/AMQ-5159
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: stomp
>Affects Versions: 5.9.0, 5.9.1
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.10.0
>
>
> If a STOMP client subscribes as a queue browser but doesn't request the auto 
> acknowledge mode, a NullPointerException is triggered when the end-of-browse 
> message is sent.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-5159) STOMP browse gets null pointer exception if ACK mode is not AUTO

2014-04-23 Thread Timothy Bish (JIRA)
Timothy Bish created AMQ-5159:
-

 Summary: STOMP browse gets null pointer exception if ACK mode is 
not AUTO
 Key: AMQ-5159
 URL: https://issues.apache.org/jira/browse/AMQ-5159
 Project: ActiveMQ
  Issue Type: Bug
  Components: stomp
Affects Versions: 5.9.1, 5.9.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 5.10.0


If a STOMP client subscribes as a queue browser but doesn't request the auto 
acknowledge mode, a NullPointerException is triggered when the end-of-browse 
message is sent.
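
A minimal way to exercise this is a raw STOMP SUBSCRIBE with the browser header
and a non-auto ack mode; a rough sketch (destination and credentials are
illustrative defaults):

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public class StompBrowseRepro {

    // STOMP frames are NUL-terminated.
    static void sendFrame(OutputStream out, String frame) throws IOException {
        out.write(frame.getBytes("UTF-8"));
        out.write(0);
        out.flush();
    }

    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 61613)) {
            OutputStream out = socket.getOutputStream();
            sendFrame(out, "CONNECT\naccept-version:1.1\nhost:localhost\n"
                    + "login:system\npasscode:manager\n\n");
            // Subscribe as a queue browser with ack:client instead of ack:auto;
            // on affected brokers the end-of-browse marker triggers the NPE.
            sendFrame(out, "SUBSCRIBE\nid:0\ndestination:/queue/TEST\n"
                    + "browser:true\nack:client\n\n");
            Thread.sleep(2000);  // give the broker time to dispatch the browse-end frame
        }
    }
}
{code}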



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (AMQ-5077) Improve performance of ConcurrentStoreAndDispatch

2014-04-23 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13978299#comment-13978299
 ] 

Gary Tully commented on AMQ-5077:
-

[~rwagg] I added a concurrentSend option to the composite destination and this 
reduces the latency because the writes can be batched.

{code}{code}

Note: concurrentStoreAndDispatch gets in the way when there are no consumers, so 
it is disabled for the graph/test [1].

With concurrentStoreAndDispatch enabled, the pending write queue can build up some 
depth, which allows fast consumers to negate the write.
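
As a rough sketch of the kind of composite-destination routing under test,
configured programmatically (names are illustrative, and the concurrentSend
setter is assumed from the commit linked below):

{code}
import java.util.Arrays;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.DestinationInterceptor;
import org.apache.activemq.broker.region.virtual.CompositeTopic;
import org.apache.activemq.broker.region.virtual.VirtualDestination;
import org.apache.activemq.broker.region.virtual.VirtualDestinationInterceptor;
import org.apache.activemq.command.ActiveMQQueue;

public class ConcurrentSendBroker {
    public static void main(String[] args) throws Exception {
        // One composite topic fanning out to five queues, with the new
        // concurrentSend option enabled so the store writes can be batched.
        CompositeTopic routed = new CompositeTopic();
        routed.setName("TopicRouted.5");
        routed.setConcurrentSend(true);
        routed.setForwardTo(Arrays.asList(
                new ActiveMQQueue("TopicQueueRouted.1"),
                new ActiveMQQueue("TopicQueueRouted.2"),
                new ActiveMQQueue("TopicQueueRouted.3"),
                new ActiveMQQueue("TopicQueueRouted.4"),
                new ActiveMQQueue("TopicQueueRouted.5")));

        VirtualDestinationInterceptor interceptor = new VirtualDestinationInterceptor();
        interceptor.setVirtualDestinations(new VirtualDestination[]{routed});

        BrokerService broker = new BrokerService();
        broker.setDestinationInterceptors(new DestinationInterceptor[]{interceptor});
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
    }
}
{code}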

Changes: http://git-wip-us.apache.org/repos/asf/activemq/commit/08bb172f

[1] http://www.chartgo.com/get.do?id=7f99485050

> Improve performance of ConcurrentStoreAndDispatch
> -
>
> Key: AMQ-5077
> URL: https://issues.apache.org/jira/browse/AMQ-5077
> Project: ActiveMQ
>  Issue Type: Wish
>  Components: Message Store
>Affects Versions: 5.9.0
> Environment: 5.9.0.redhat-610343
>Reporter: Jason Shepherd
>Assignee: Gary Tully
> Attachments: Test combinations.xlsx, compDesPerf.tar.gz, 
> topicRouting.zip
>
>
> We have publishers publishing to a topic which has 5 topic -> queue routings, 
> and the maximum attainable message rate is ~833 messages/sec, with each 
> message around 5k in size.
> To test this I set up a JMS config with the following topics:
> Topic
> TopicRouted.1
> ...
> TopicRouted.11
> Each topic has an increasing number of routings to queues, and a client is 
> set up to subscribe to all the queues.
> Rough message rates:
> routings messages/sec
> 0 2500
> 1 1428
> 2 2000
> 3 1428
> 4 
> 5 833
> This occurs whether producerFlowControl is set to true or false in the broker 
> config, and with KahaDB disk syncing turned off. We also tried experimenting 
> with concurrentStoreAndDispatch, but that didn't seem to help.
> LevelDB didn't give any notable performance improvement either.
> We also have asyncSend enabled on the producer, and have a requirement to use 
> persistent messages. We have also experimented with sending messages in a 
> transaction, but that hasn't really helped.
> It seems like producer throughput rate across all queue destinations, all 
> connections and all publisher machines is limited by something on the broker, 
> through a mechanism which is not producer flow control. I think the prime 
> suspect is still contention on the index.
> We did some tests with the YourKit profiler.
> The profiler was attached to the broker at startup and allowed to run, and 
> then a topic publisher was started, routing to 5 queues.
> Profiler statistics were reset, the publisher was allowed to run for 60 
> seconds, and then a profiling snapshot was taken. During that time, ~9600 
> messages were logged as being sent, for a rate of ~160/sec.
> This ties in roughly with the invocation counts recorded in the snapshot (I 
> think): ~43k calls.
> From what I can work out in the snapshot (filtering everything but 
> org.apache.activemq.store.kahadb), for the 60-second sample period:
> 24.8 seconds elapsed in 
> org.apache.activemq.store.kahadb.KahaDbTransactionStore$1.removeAsyncMessage(ConnectionContext,
>  MessageAck).
> 18.3 seconds elapsed in 
> org.apache.activemq.store.kahadb.KahaDbTransactionStore$1.asyncAddQueueMessage(ConnectionContext,
>  Message, boolean).
> From these, a further large portion of the time is spent inside 
> MessageDatabase:
> org.apache.activemq.store.kahadb.MessageDatabase.process(KahaRemoveMessageCommand,
>  Location) - 10 secs elapsed
> org.apache.activemq.store.kahadb.MessageDatabase.process(KahaAddMessageCommand,
>  Location) - 8.5 secs elapsed.
> As both of these lock on indexLock.writeLock(), and both take place on the 
> NIO transport threads, I think this accounts for at least some of the message 
> throughput limits. As messages are added and removed from the index one by 
> one, regardless of sync type settings, this adds a fair amount of overhead. 
> While we're not synchronising on writes to disk, we are performing work on 
> the NIO worker thread which can block on locks, and this could account for 
> the behaviour we've seen client-side.
> To Reproduce:
> 1. Install a broker and use the attached configuration.
> 2. Use the 5.8.0 example ant script to consume from the queues, 
> TopicQueueRouted.1 - 5. eg:
>ant consumer -Durl=tcp://localhost:61616 -Dsubject=TopicQueueRouted.1 
> -Duser=admin -Dpassword=admin -Dmax=-1
> 3. Use the modified version of 5.8.0 example ant script (attached) to send 
> messages to topics, TopicRouted.1 - 5, eg:
>ant producer 
> -Durl='tcp://localhost:61616?jms.useAsyncSend=true&wireFormat.tightEncodingEnabled=false&keepAlive=true&wireFormat.maxInactivityDuration=6&socketBufferSize=32768'
>  -Dsubject=TopicRouted.1 -Duser=admin -Dpassword=admin -D

[jira] [Commented] (AMQ-5082) ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening

2014-04-23 Thread Kevin McLaughlin (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13978219#comment-13978219
 ] 

Kevin McLaughlin commented on AMQ-5082:
---

Still exists in 5.9.1.

> ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening
> ---
>
> Key: AMQ-5082
> URL: https://issues.apache.org/jira/browse/AMQ-5082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.9.0, 5.10.0
>Reporter: Scott Feldstein
>Priority: Critical
> Attachments: 03-07.tgz, amq_5082_threads.tar.gz, 
> mq-node1-cluster.failure, mq-node2-cluster.failure, mq-node3-cluster.failure, 
> zookeeper.out-cluster.failure
>
>
> I have a 3-node AMQ cluster and one ZooKeeper node using a replicatedLevelDB 
> persistence adapter.
> {code}
> <persistenceAdapter>
>   <replicatedLevelDB
>     directory="${activemq.data}/leveldb"
>     replicas="3"
>     bind="tcp://0.0.0.0:0"
>     zkAddress="zookeep0:2181"
>     zkPath="/activemq/leveldb-stores"/>
> </persistenceAdapter>
> {code}
> After about a day or so of sitting idle there are cascading failures and the 
> cluster completely stops listening altogether.
> I can reproduce this consistently on 5.9 and the latest 5.10 (commit 
> 2360fb859694bacac1e48092e53a56b388e1d2f0).  I am going to attach logs from 
> the three MQ nodes and the ZooKeeper logs that reflect the time when the 
> cluster starts having issues.
> The cluster stops listening at Mar 4, 2014 4:56:50 AM (within 5 seconds).
> The OSs are all CentOS 5.9 on one ESX server, so I doubt networking is an 
> issue.
> If you need more data it should be pretty easy to get whatever is needed 
> since it is consistently reproducible.
> This bug may be related to AMQ-5026, but looks different enough to file a 
> separate issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-5158) Non persistent Messages not getting expired

2014-04-23 Thread Anuj Khandelwal (JIRA)
Anuj Khandelwal created AMQ-5158:


 Summary: Non persistent Messages not getting expired
 Key: AMQ-5158
 URL: https://issues.apache.org/jira/browse/AMQ-5158
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.8.0
Reporter: Anuj Khandelwal


This comes from 
http://activemq.2283324.n4.nabble.com/Non-persistent-Messages-Not-getting-expired-even-after-expiration-time-exceeded-td4680428.html#a4680459

Problem: non-persistent messages, if offlined to tmp storage (for example 
because of an inactive durable subscriber), won't be expired until they are 
scheduled for dispatch.

Test scenario: a non-persistent message is sent from the producer to a topic 
which has an inactive durable subscriber; the message is stored in the 
non-persistent message tmp store "activemq-data/broker/tmpstorage/". 
The message is not deleted even after its expiration time is exceeded.

According to the discussion on the ActiveMQ user forum, it will only be expired 
when the message is ready to dispatch, which should not be the case.
Ideally the broker should expire the message once the expiration time has 
passed, irrespective of whether the message is ready to dispatch.
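
For reference, the producer side of this scenario is plain JMS; a rough sketch
(broker URL, topic name and TTL are illustrative):

{code}
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ExpiringNonPersistentSend {
    public static void main(String[] args) throws Exception {
        Connection conn = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("TEST.EXPIRY");

        MessageProducer producer = session.createProducer(topic);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        producer.setTimeToLive(5000);   // message should expire 5s after send

        // With an inactive durable subscriber on the topic, the message is
        // offlined to tmp storage and, per this issue, is not removed there
        // even after the 5s expiration has passed.
        producer.send(session.createTextMessage("should expire in 5 seconds"));
        conn.close();
    }
}
{code}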



Thanks,
Anuj



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-5157) Non persistent Messages not getting expired

2014-04-23 Thread Anuj Khandelwal (JIRA)
Anuj Khandelwal created AMQ-5157:


 Summary: Non persistent Messages not getting expired
 Key: AMQ-5157
 URL: https://issues.apache.org/jira/browse/AMQ-5157
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.8.0
Reporter: Anuj Khandelwal


This comes from 
http://activemq.2283324.n4.nabble.com/Non-persistent-Messages-Not-getting-expired-even-after-expiration-time-exceeded-td4680428.html#a4680459

Problem: non-persistent messages, if offlined to tmp storage (for example 
because of an inactive durable subscriber), won't be expired until they are 
scheduled for dispatch.

Test scenario: a non-persistent message is sent from the producer to a topic 
which has an inactive durable subscriber; the message is stored in the 
non-persistent message tmp store "activemq-data/broker/tmpstorage/". 
The message is not deleted even after its expiration time is exceeded.

According to the discussion on the ActiveMQ user forum, it will only be expired 
when the message is ready to dispatch, which should not be the case.
Ideally the broker should expire the message once the expiration time has 
passed, irrespective of whether the message is ready to dispatch.



Thanks,
Anuj



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-5156) Multiple tests using durable subs are failing

2014-04-23 Thread Kevin Earls (JIRA)
Kevin Earls created AMQ-5156:


 Summary: Multiple tests using durable subs are failing
 Key: AMQ-5156
 URL: https://issues.apache.org/jira/browse/AMQ-5156
 Project: ActiveMQ
  Issue Type: Bug
Reporter: Kevin Earls


I'll look into this further, but the following tests have recently started 
failing:

org.apache.activemq.bugs.AMQ4413Test.testDurableSubMessageLoss
org.apache.activemq.bugs.DurableConsumerTest.testConcurrentDurableConsumer 
{useDedicatedTaskRunner=true}
org.apache.activemq.bugs.DurableConsumerTest.testConcurrentDurableConsumer 
{useDedicatedTaskRunner=false}
org.apache.activemq.store.jdbc.JDBCMessagePriorityTest.testDurableSubsReconnect 
{dispatchAsync=false, useCache=true, prefetchVal=1000}
org.apache.activemq.store.jdbc.JDBCMessagePriorityTest.testDurableSubsReconnect 
{dispatchAsync=false, useCache=false, prefetchVal=1000}
org.apache.activemq.store.kahadb.KahaDBMessagePriorityTest.testDurableSubsReconnect
 {dispatchAsync=false, useCache=true, prefetchVal=1000}
org.apache.activemq.store.kahadb.KahaDBMessagePriorityTest.testDurableSubsReconnect
 {dispatchAsync=false, useCache=false, prefetchVal=1000}
org.apache.activemq.usecases.DurableConsumerCloseAndReconnectTcpTest.testDurableSubscriberReconnectMultipleTimes
org.apache.activemq.usecases.DurableConsumerCloseAndReconnectTest.testDurableSubscriberReconnectMultipleTimes
org.apache.activemq.usecases.DurableSubscriptionOfflineTest.testOrderOnActivateDeactivate



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (AMQ-4182) Memory Leak for ActiveMQBytesMessage with Compression as true

2014-04-23 Thread Dejan Bosanac (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dejan Bosanac resolved AMQ-4182.


   Resolution: Fixed
Fix Version/s: 5.10.0
 Assignee: Dejan Bosanac

I implemented explicit compression/decompression of the payload when we 
initialise writing and reading operations.

http://git-wip-us.apache.org/repos/asf/activemq/commit/44bb9fbe

With this we can avoid using finalize() to close the streams, which can cause 
memory leaks. The trade-off is that we keep uncompressed messages in memory, 
but IMHO that's not an issue, as compression is meant to be used on the wire and 
a similar approach is used for other kinds of messages.

There are still a couple more places where we should remove finalize().

> Memory Leak for ActiveMQBytesMessage with Compression as true
> -
>
> Key: AMQ-4182
> URL: https://issues.apache.org/jira/browse/AMQ-4182
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JMS client
>Affects Versions: 5.5.1
> Environment: Linux(Redhat 5.5), Windows 7
>Reporter: Jeff Huang
>Assignee: Dejan Bosanac
>Priority: Minor
> Fix For: 5.10.0
>
>
> InflaterInputStream is supposed to be closed explicitly to release the 
> resources allocated by its JNI methods. In ActiveMQBytesMessage, the dataIn 
> property is simply discarded without being closed, which results in a memory 
> leak that can't be detected from the heap size and isn't bounded by -Xmx or 
> -XX:MaxDirectMemorySize.
> Please run the following test program to verify the issue:
> import java.util.concurrent.TimeUnit;
> import javax.jms.BytesMessage;
> import javax.jms.Connection;
> import javax.jms.Session;
> import org.apache.activemq.ActiveMQConnectionFactory;
> import org.apache.activemq.command.ActiveMQBytesMessage;
> /**
>  * A simple test to verify memory leak in ActiveMQBytesMessage.
>  */
> public class Main
> {
> public static void main(String[] args) throws Exception 
> {
> ActiveMQConnectionFactory connFactory = new 
> ActiveMQConnectionFactory("vm://localhost");
> connFactory.setUseCompression(true);
> Connection conn = connFactory.createConnection();
> Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
> BytesMessage message = session.createBytesMessage();
> 
> message.writeBytes(new byte[1024]);
> ActiveMQBytesMessage m = (ActiveMQBytesMessage)message;
> if(!m.isCompressed())
> {
> throw new RuntimeException();
> }
> 
> 
> while (true)
> {
> for (int k = 0; k < 1024; ++k)
> {
> message.reset();
> byte[] data = new byte[1024];
> message.readBytes(data);
> }
> TimeUnit.MILLISECONDS.sleep(10);
> }
> }
> }



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-5155) Heartbeat fails in STOMP over WebSockets

2014-04-23 Thread Arjan van den Berg (JIRA)
Arjan van den Berg created AMQ-5155:
---

 Summary: Heartbeat fails in STOMP over WebSockets
 Key: AMQ-5155
 URL: https://issues.apache.org/jira/browse/AMQ-5155
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.9.1
Reporter: Arjan van den Berg
Priority: Minor


From AMQ-4740:
I receive the following error after establishing a connection and heartbeat 
through stomp.js. This seems to occur after the 'PING' is sent.
-- stomp.js output
<<< CONNECTED
heart-beat:1,1
session:ID:localhost.localdomain-45596-1396530920609-2:2
server:ActiveMQ/5.10-SNAPSHOT
version:1.1
send PING every 1ms 
check PONG every 1ms 
<<< PONG 
>>> PING 
did not receive server activity for the last 20005ms 
Whoops! Lost connection to ws://172.16.99.73:61614/stomp
- activemq console ---
WARN | Transport Connection to: StompSocket_19548821 failed: java.io.IOException
Exception in thread "ActiveMQ InactivityMonitor Worker" 
java.lang.NullPointerException
at 
org.apache.activemq.transport.AbstractInactivityMonitor.onException(AbstractInactivityMonitor.java:314)
at 
org.apache.activemq.transport.AbstractInactivityMonitor$4.run(AbstractInactivityMonitor.java:215)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
WARN | Transport Connection to: StompSocket_19548821 failed: java.io.IOException
Exception in thread "ActiveMQ InactivityMonitor Worker" 
java.lang.NullPointerException
at 
org.apache.activemq.transport.AbstractInactivityMonitor.onException(AbstractInactivityMonitor.java:314)
at 
org.apache.activemq.transport.AbstractInactivityMonitor$4.run(AbstractInactivityMonitor.java:215)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)

For me it looks as if the StompInactivityMonitor is delivering its events to 
the wrong Transport, i.e. it needs a "narrow()" when setting it up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (AMQ-5155) Heartbeat fails in STOMP over WebSockets

2014-04-23 Thread Arjan van den Berg (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arjan van den Berg updated AMQ-5155:


Affects Version/s: 5.10.0

> Heartbeat fails in STOMP over WebSockets
> 
>
> Key: AMQ-5155
> URL: https://issues.apache.org/jira/browse/AMQ-5155
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.9.1, 5.10.0
>Reporter: Arjan van den Berg
>Priority: Minor
>
> From AMQ-4740:
> I receive the following error after establishing a connection and heartbeat 
> through stomp.js. This seems to occur after the 'PING' is sent.
> -- stomp.js output
> <<< CONNECTED
> heart-beat:1,1
> session:ID:localhost.localdomain-45596-1396530920609-2:2
> server:ActiveMQ/5.10-SNAPSHOT
> version:1.1
> send PING every 1ms 
> check PONG every 1ms 
> <<< PONG 
> >>> PING 
> did not receive server activity for the last 20005ms 
> Whoops! Lost connection to ws://172.16.99.73:61614/stomp
> - activemq console ---
> WARN | Transport Connection to: StompSocket_19548821 failed: 
> java.io.IOException
> Exception in thread "ActiveMQ InactivityMonitor Worker" 
> java.lang.NullPointerException
> at 
> org.apache.activemq.transport.AbstractInactivityMonitor.onException(AbstractInactivityMonitor.java:314)
> at 
> org.apache.activemq.transport.AbstractInactivityMonitor$4.run(AbstractInactivityMonitor.java:215)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
> WARN | Transport Connection to: StompSocket_19548821 failed: 
> java.io.IOException
> Exception in thread "ActiveMQ InactivityMonitor Worker" 
> java.lang.NullPointerException
> at 
> org.apache.activemq.transport.AbstractInactivityMonitor.onException(AbstractInactivityMonitor.java:314)
> at 
> org.apache.activemq.transport.AbstractInactivityMonitor$4.run(AbstractInactivityMonitor.java:215)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
> For me it looks as if the StompInactivityMonitor is delivering its events to 
> the wrong Transport, i.e. it needs a "narrow()" when setting it up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)