[jira] [Assigned] (AMQ-4906) advisory producerCount = 0 is not received on temporary queue
[ https://issues.apache.org/jira/browse/AMQ-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rob Davies reassigned AMQ-4906:
-------------------------------
    Assignee: Rob Davies

advisory producerCount = 0 is not received on temporary queue

Key: AMQ-4906
URL: https://issues.apache.org/jira/browse/AMQ-4906
Project: ActiveMQ
Issue Type: Bug
Components: Broker
Affects Versions: 5.7.0
Reporter: Christian Mamen
Assignee: Rob Davies

I notice I never receive producer advisory messages (ActiveMQ.Advisory.Producer.Queue.[...]) with producerCount=0 when the client message producers on a temporary queue are closed. I do receive producerCount > 0. However, the consumerCount (from ActiveMQ.Advisory.Consumer.Queue.[...]) appears to work as expected.

From looking into org.apache.activemq.advisory.AdvisoryBroker.java:

{code}
@Override
public void removeProducer(ConnectionContext context, ProducerInfo info) throws Exception {
    super.removeProducer(context, info);

    // Don't advise advisory topics.
    ActiveMQDestination dest = info.getDestination();
    if (info.getDestination() != null && !AdvisorySupport.isAdvisoryTopic(dest)) {
        ActiveMQTopic topic = AdvisorySupport.getProducerAdvisoryTopic(dest);
        producers.remove(info.getProducerId());
        // PLEASE NOTE: could this actually be destinations.containsKey(dest)?
        if (!dest.isTemporary() || destinations.contains(dest)) {
            fireProducerAdvisory(context, dest, topic, info.createRemoveCommand());
        }
    }
}
{code}

As reference, the working removeConsumer method:

{code}
@Override
public void removeConsumer(ConnectionContext context, ConsumerInfo info) throws Exception {
    super.removeConsumer(context, info);

    // Don't advise advisory topics.
    ActiveMQDestination dest = info.getDestination();
    if (!AdvisorySupport.isAdvisoryTopic(dest)) {
        ActiveMQTopic topic = AdvisorySupport.getConsumerAdvisoryTopic(dest);
        consumers.remove(info);
        if (!dest.isTemporary() || destinations.containsKey(dest)) {
            fireConsumerAdvisory(context, dest, topic, info.createRemoveCommand());
        }
    }
}
{code}

Please note the destinations.containsKey(dest) in removeConsumer vs destinations.contains(dest) in removeProducer (for a ConcurrentHashMap, contains() is identical to containsValue()). I'm assuming the logic in both cases is to make sure the producer's destination still exists.

I tested this with 5.7.0; the code is similar in 5.9.0.

--
This message was sent by Atlassian JIRA
(v6.1#6144)
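The containsKey vs contains distinction the reporter points at is the crux: java.util.concurrent.ConcurrentHashMap inherits a legacy contains(Object) method that checks values, not keys. A minimal, self-contained demonstration (the key and value below are made up for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ContainsVsContainsKey {
    public static void main(String[] args) {
        // Hypothetical entry just for illustration; in AdvisoryBroker the map
        // is keyed by destination.
        ConcurrentHashMap<String, Integer> destinations = new ConcurrentHashMap<>();
        destinations.put("temp-queue://ID:client-1", 42);

        // contains(Object) is a legacy alias for containsValue(Object),
        // so looking up a key this way always fails:
        System.out.println(destinations.contains("temp-queue://ID:client-1"));    // false
        System.out.println(destinations.containsKey("temp-queue://ID:client-1")); // true
        System.out.println(destinations.contains(42));                            // true (42 is a value)
    }
}
```

This would explain why the temporary-queue guard in removeProducer never passes: the destination is a key of the map, so contains(dest) returns false even when the destination still exists.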
[jira] [Resolved] (AMQ-4906) advisory producerCount = 0 is not received on temporary queue
[ https://issues.apache.org/jira/browse/AMQ-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rob Davies resolved AMQ-4906.
-----------------------------
    Resolution: Fixed
    Fix Version/s: 5.10.0

advisory producerCount = 0 is not received on temporary queue

Key: AMQ-4906
URL: https://issues.apache.org/jira/browse/AMQ-4906
Project: ActiveMQ
Issue Type: Bug
Components: Broker
Affects Versions: 5.7.0
Reporter: Christian Mamen
Assignee: Rob Davies
Fix For: 5.10.0
[jira] [Commented] (AMQ-4682) runtime configuration - allow selective application of changes to xml configuration without broker restart
[ https://issues.apache.org/jira/browse/AMQ-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834800#comment-13834800 ]

Tamilmaran commented on AMQ-4682:
---------------------------------

By using this configuration, can we change the port of a transport connector or add a new transport connector?

runtime configuration - allow selective application of changes to xml configuration without broker restart

Key: AMQ-4682
URL: https://issues.apache.org/jira/browse/AMQ-4682
Project: ActiveMQ
Issue Type: New Feature
Components: Broker
Affects Versions: 5.8.0
Reporter: Gary Tully
Assignee: Gary Tully
Labels: configuration, runtime, updates
Fix For: 5.9.0

Support on-the-fly configuration changes where appropriate. Via JMX it is possible to make changes, but they don't persist. Via OSGi we can restart the broker to pick up changes to XML config, but where it makes sense we should be able to apply changes on the fly. A first example would be the addition of a new network connector by adding the relevant XML config (edit or copy over) that is in use by the broker.
Jenkins build is back to normal : ActiveMQ-Trunk-Deploy » ActiveMQ :: All JAR bundle #891
See https://builds.apache.org/job/ActiveMQ-Trunk-Deploy/org.apache.activemq$activemq-all/891/
Jenkins build became unstable: ActiveMQ » ActiveMQ :: MQTT Protocol #1424
See https://builds.apache.org/job/ActiveMQ/org.apache.activemq$activemq-mqtt/1424/
[jira] [Commented] (AMQ-4682) runtime configuration - allow selective application of changes to xml configuration without broker restart
[ https://issues.apache.org/jira/browse/AMQ-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834910#comment-13834910 ]

Gary Tully commented on AMQ-4682:
---------------------------------

@tamilmaran - not at the moment. Please open an enhancement jira for that with some detail on the supporting use case; I will see about getting it done for 5.10.
[jira] [Updated] (AMQ-4882) LevelDB can get to a corrupt state
[ https://issues.apache.org/jira/browse/AMQ-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Remo Gloor updated AMQ-4882:
----------------------------
    Attachment: (was: TestClient.zip)

LevelDB can get to a corrupt state

Key: AMQ-4882
URL: https://issues.apache.org/jira/browse/AMQ-4882
Project: ActiveMQ
Issue Type: Bug
Components: activemq-leveldb-store
Affects Versions: 5.9.0
Reporter: Remo Gloor
Priority: Critical
Attachments: TestClient.zip, activemq.log

A consumer/producer with failover transport is connected to AMQ and processes messages in XA transactions. When AMQ is restarted, it can happen that LevelDB gets into a corrupt state so that AMQ cannot be started anymore without deleting the database.

Reproduction:
- Configure AMQ with LevelDB
- Run the attached TestClient
- Restart AMQ several times. At some point it won't start anymore and produces the exception in the attached log file.
[jira] [Updated] (AMQ-4882) LevelDB can get to a corrupt state
[ https://issues.apache.org/jira/browse/AMQ-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Remo Gloor updated AMQ-4882:
----------------------------
    Attachment: TestClient.zip
[jira] [Commented] (AMQ-4837) LevelDB corrupted in AMQ cluster
[ https://issues.apache.org/jira/browse/AMQ-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834918#comment-13834918 ]

Remo Gloor commented on AMQ-4837:
---------------------------------

https://issues.apache.org/jira/browse/AMQ-4882 is the unclustered version of this problem. I can reproduce this 100% of the time just by starting a client that processes and enqueues messages while starting/stopping AMQ. After some restarts AMQ won't start anymore and the log contains the exception posted by others in this issue.

LevelDB corrupted in AMQ cluster

Key: AMQ-4837
URL: https://issues.apache.org/jira/browse/AMQ-4837
Project: ActiveMQ
Issue Type: Bug
Components: activemq-leveldb-store
Affects Versions: 5.9.0
Environment: CentOS, Linux version 2.6.32-71.29.1.el6.x86_64, java-1.7.0-openjdk.x86_64/java-1.6.0-openjdk.x86_64, zookeeper-3.4.5.2
Reporter: Guillaume
Assignee: Hiram Chirino
Priority: Critical
Attachments: LevelDBCorrupted.zip, activemq.xml

I have clustered 3 ActiveMQ instances using replicated LevelDB and ZooKeeper. When performing some tests using the Web UI, I came across issues that appear to corrupt the LevelDB data files. The issue can be replicated by performing the following steps:

1. Start 3 activemq nodes.
2. Push a message to the master (Node1) and browse the queue using the web UI.
3. Stop the master node (Node1).
4. Push a message to the new master (Node2) and browse the queue using the web UI. Message summary and queue content ok.
5. Start Node1.
6. Stop the master node (Node2).
7. Browse the queue using the web UI on the new master (Node3). Message summary ok, however when clicking on the queue, no message details. An error (see below) is logged by the master, which attempts a restart.

From this point, the database appears to be corrupted and the same error occurs on each node infinitely (shutdown/restart). The only way around it is to stop the nodes and clear the data files. However, when a message is pushed between steps 5 and 6, the error doesn't occur.

LevelDB configuration on the 3 instances:

{code}
<persistenceAdapter>
  <replicatedLevelDB directory="${activemq.data}/leveldb" replicas="3" bind="tcp://0.0.0.0:0"
      zkAddress="zkserver:2181" zkPath="/activemq/leveldb-stores"/>
</persistenceAdapter>
{code}

The error is:

{code}
INFO | Stopping BrokerService[localhost] due to exception, java.io.IOException
java.io.IOException
	at org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:39)
	at org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:543)
	at org.apache.activemq.leveldb.LevelDBClient.might_fail_using_index(LevelDBClient.scala:974)
	at org.apache.activemq.leveldb.LevelDBClient.collectionCursor(LevelDBClient.scala:1270)
	at org.apache.activemq.leveldb.LevelDBClient.queueCursor(LevelDBClient.scala:1194)
	at org.apache.activemq.leveldb.DBManager.cursorMessages(DBManager.scala:708)
	at org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.recoverNextMessages(LevelDBStore.scala:741)
	at org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:106)
	at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:258)
	at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:108)
	at org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:157)
	at org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1875)
	at org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2086)
	at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1581)
	at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
	at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.NullPointerException
	at org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1198)
	at
{code}
[jira] [Created] (AMQ-4907) kahadb - do some sanity check on the index when checkForCorruptJournalFiles
Gary Tully created AMQ-4907:
----------------------------
    Summary: kahadb - do some sanity check on the index when checkForCorruptJournalFiles
    Key: AMQ-4907
    URL: https://issues.apache.org/jira/browse/AMQ-4907
    Project: ActiveMQ
    Issue Type: Bug
    Components: Message Store
    Affects Versions: 5.9.0
    Reporter: Gary Tully
    Assignee: Gary Tully
    Fix For: 5.10.0

When the index is corrupt, all bets are off and we need to replay the journal to rebuild the index. We do this automatically on a failure to load the index. When the index loads but is still corrupt, we resume and messages are unavailable. Adding some sanity checking to the index when checkForCorruptJournalFiles is enabled (paranoid mode) will allow us to detect corruption and force an automatic recreation.
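The recovery flow described above (verify the index on load; on corruption, discard it and replay the journal) can be sketched generically. The class and method names below are hypothetical illustrations of the idea, not KahaDB's actual API:

```java
import java.util.zip.CRC32;

public class IndexSanityCheck {
    /**
     * Hypothetical sketch of a paranoid-mode index load: validate a stored
     * CRC32 against the index bytes and, on mismatch, discard the index and
     * rebuild it by replaying the journal.
     */
    public static byte[] loadIndex(byte[] indexBytes, long storedCrc, byte[] journalBytes) {
        CRC32 crc = new CRC32();
        crc.update(indexBytes, 0, indexBytes.length);
        if (crc.getValue() != storedCrc) {
            // Index fails the sanity check: force recreation from the journal.
            return rebuildFromJournal(journalBytes);
        }
        return indexBytes;
    }

    // Placeholder for the journal replay that reconstructs the index.
    static byte[] rebuildFromJournal(byte[] journalBytes) {
        return journalBytes.clone();
    }
}
```

The point of the check is that loading without error is not the same as loading correct data; an explicit integrity check lets the broker fail over to the (authoritative) journal instead of serving a silently broken index.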
[jira] [Updated] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails
[ https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

matteo rulli updated AMQ-4889:
------------------------------
    Attachment: AMQ-4889-patch__5.9.0.txt

I attached a possible patch. As far as I can see, the problem was adding ProxyConnection objects into a collection inside ProxyConnector before starting them. Maybe an AMQ guru can check this and give some feedback? Thanks!

ProxyConnector memory usage skyrockets when several ssl handshakes fails

Key: AMQ-4889
URL: https://issues.apache.org/jira/browse/AMQ-4889
Project: ActiveMQ
Issue Type: Bug
Components: Broker
Affects Versions: 5.8.0, 5.9.0
Environment: Seen in Windows 7 64bit, Windows Server 2008 R2 and Linux RHEL 6.3 64 bit
Reporter: matteo rulli
Attachments: AMQ-4889-patch__5.9.0.txt, ProxyConnIssue.rar

See [nabble|http://activemq.2283324.n4.nabble.com/Proxy-Connector-memory-consumption-td4674255.html] for further details.

To reproduce the issue:
1. Start the embedded proxy broker and the AMQ broker that are embedded in the *AMQTestBroker* project (see attachments).
2. Start the *AMQTestConsumer* project. This program repeatedly tries to open a connection to the ProxyConnector with wrong certificates.
3. Open jconsole to monitor AMQTestBroker memory usage: you should experience an OOM error within one hour with the suggested settings (Xmx = 2048m).

Launch configurations and test keystores are attached to this issue along with the Java projects. This behavior seems to affect _ProxyConnector_ only; running the test against a standard nio-based _TransportConnector_ does not seem to produce anomalous memory consumption.
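The leak mechanism the patch description suggests - registering a connection in a collection before starting it, so that a failed SSL handshake leaves a dead entry behind forever - can be sketched generically. The Conn interface and method names below are hypothetical illustrations, not the actual ActiveMQ ProxyConnector code:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ProxyLeakSketch {
    interface Conn {
        void start() throws Exception;
    }

    final Set<Conn> connections = ConcurrentHashMap.newKeySet();

    // Buggy ordering (as described in the report): the connection is tracked
    // before start(), so a handshake failure strands an entry in the set.
    void acceptBuggy(Conn c) {
        connections.add(c);
        try {
            c.start();
        } catch (Exception e) {
            // handshake failed, but c is already in the collection and
            // is never removed -> memory grows with every failed attempt
        }
    }

    // Patched ordering: only track connections that started successfully.
    void acceptFixed(Conn c) {
        try {
            c.start();
            connections.add(c);
        } catch (Exception e) {
            // handshake failed; nothing was registered, nothing leaks
        }
    }
}
```

Under repeated handshake failures the buggy ordering accumulates one dead object per attempt, which matches the steadily climbing heap seen in jconsole in the reproduction above.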