[jira] [Created] (AMQ-5900) LevelDB integration does not work in Solaris

2015-07-23 Thread John Lindwall (JIRA)
John Lindwall created AMQ-5900:
--

 Summary: LevelDB integration does not work in Solaris
 Key: AMQ-5900
 URL: https://issues.apache.org/jira/browse/AMQ-5900
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-leveldb-store
Affects Versions: 5.11.1, 5.9.0
 Environment: Solaris 5.11, jdk 1.7_60
Reporter: John Lindwall


I had three ActiveMQ nodes running with replicated LevelDB. I connected a 
client that listened for messages, but I did not send any messages at all. 

I then used kill -9 to kill the master node. The client failed to reconnect 
even though I used a failover URL. In node2's activemq.log I see the 
following as it attempted to become the new master: 

2015-07-22 17:57:19,334 | INFO  | Attaching to master: tcp://172.10.10.10:61619 
| org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
hawtdispatch-DEFAULT-1 
2015-07-22 17:57:19,338 | WARN  | Unexpected session error: 
java.net.ConnectException: Connection refused | 
org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
hawtdispatch-DEFAULT-1 
2015-07-22 17:57:20,044 | INFO  | Not enough cluster members have reported 
their update positions yet. | 
org.apache.activemq.leveldb.replicated.MasterElector | main-EventThread 
2015-07-22 17:57:20,059 | INFO  | Slave stopped | 
org.apache.activemq.leveldb.replicated.MasterElector | ActiveMQ 
BrokerService[xifin] Task-3 
2015-07-22 17:57:20,061 | INFO  | Not enough cluster members have reported 
their update positions yet. | 
org.apache.activemq.leveldb.replicated.MasterElector | ActiveMQ 
BrokerService[xifin] Task-3 
2015-07-22 17:57:20,068 | INFO  | Not enough cluster members have reported 
their update positions yet. | 
org.apache.activemq.leveldb.replicated.MasterElector | main-EventThread 
2015-07-22 17:57:20,087 | INFO  | Promoted to master | 
org.apache.activemq.leveldb.replicated.MasterElector | main-EventThread 
2015-07-22 17:57:20,124 | INFO  | Using the pure java LevelDB implementation. | 
org.apache.activemq.leveldb.LevelDBClient | ActiveMQ BrokerService[xifin] 
Task-3 
2015-07-22 17:57:20,380 | INFO  | No IOExceptionHandler registered, ignoring IO 
exception | org.apache.activemq.broker.BrokerService | LevelDB IOException 
handler. 
java.io.IOException: org.iq80.snappy.CorruptionException: Invalid copy offset 
for opcode starting at 8 
at 
org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:39)[activemq-client-5.11.1.jar:5.11.1]
 
at 
org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:552)[activemq-leveldb-store-5.11.1.jar:5.11.1]
 
at 
org.apache.activemq.leveldb.LevelDBClient.replay_init(LevelDBClient.scala:667)[activemq-leveldb-store-5.11.1.jar:5.11.1]
 
at 
org.apache.activemq.leveldb.LevelDBClient.start(LevelDBClient.scala:558)[activemq-leveldb-store-5.11.1.jar:5.11.1]
 
at 
org.apache.activemq.leveldb.DBManager.start(DBManager.scala:648)[activemq-leveldb-store-5.11.1.jar:5.11.1]
 
at 
org.apache.activemq.leveldb.LevelDBStore.doStart(LevelDBStore.scala:312)[activemq-leveldb-store-5.11.1.jar:5.11.1]
 
at 
org.apache.activemq.leveldb.replicated.MasterLevelDBStore.doStart(MasterLevelDBStore.scala:110)[activemq-leveldb-store-5.11.1.jar:5.11.1]
 
at 
org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)[activemq-client-5.11.1.jar:5.11.1]
 
at 
org.apache.activemq.leveldb.replicated.ElectingLevelDBStore$$anonfun$start_master$1.apply$mcV$sp(ElectingLevelDBStore.scala:230)[activemq-leveldb-store-5.11.1.jar:5.11.1]
 
at 
org.fusesource.hawtdispatch.package$$anon$4.run(hawtdispatch.scala:330)[hawtdispatch-scala-2.11-1.21.jar:1.21]
 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_60]
 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_60]
 
at java.lang.Thread.run(Thread.java:745)[:1.7.0_60] 
2015-07-22 17:57:20,400 | INFO  | Stopped 
LevelDB[/home/jlindwall/servers/activemq-replicated-leveldb-cluster/node2/data/LevelDB]
 | org.apache.activemq.leveldb.LevelDBStore | LevelDB IOException handler. 

See also: 
http://activemq.2283324.n4.nabble.com/Leveldb-on-Solaris-td4677824.html#a4699723

Thank you.
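For context, a failover URL of the kind mentioned above takes the shape `failover:(tcp://host1:port,tcp://host2:port,...)`. The sketch below only builds such a URL from a list of hypothetical broker addresses; the hosts and ports are illustrative and not taken from the report.

```java
// Builds an ActiveMQ failover: transport URL from host:port pairs.
// Hosts and ports below are hypothetical examples.
public class FailoverUrlDemo {
    static String buildFailoverUrl(String[] brokers, boolean randomize) {
        StringBuilder sb = new StringBuilder("failover:(");
        for (int i = 0; i < brokers.length; i++) {
            if (i > 0) sb.append(',');
            sb.append("tcp://").append(brokers[i]);
        }
        sb.append(")?randomize=").append(randomize);
        return sb.toString();
    }

    public static void main(String[] args) {
        String url = buildFailoverUrl(
            new String[]{"172.10.10.10:61616", "172.10.10.11:61616", "172.10.10.12:61616"},
            false);
        // A JMS client would pass this URL to
        // new org.apache.activemq.ActiveMQConnectionFactory(url).
        System.out.println(url);
    }
}
```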



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5900) LevelDB integration does not work in Solaris

2015-07-23 Thread John Lindwall (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Lindwall updated AMQ-5900:
---
Attachment: ActiveMQFailOverMessageSender.java

My producer


 LevelDB integration does not work in Solaris
 

 Key: AMQ-5900
 URL: https://issues.apache.org/jira/browse/AMQ-5900
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-leveldb-store
Affects Versions: 5.9.0, 5.11.1
 Environment: Solaris 5.11, jdk 1.7_60
Reporter: John Lindwall
 Attachments: ActiveMQFailOverMessageSender.java







[jira] [Closed] (ARTEMIS-154) Add MQTT Protocol Support

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-154.
---
   Resolution: Fixed
 Assignee: Martyn Taylor
Fix Version/s: 1.0.1

 Add MQTT Protocol Support
 -

 Key: ARTEMIS-154
 URL: https://issues.apache.org/jira/browse/ARTEMIS-154
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Affects Versions: 1.0.1
Reporter: Martyn Taylor
Assignee: Martyn Taylor
 Fix For: 1.0.1








[jira] [Updated] (ARTEMIS-93) OSGI support

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-93?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-93:
---
Priority: Critical  (was: Major)

 OSGI support
 

 Key: ARTEMIS-93
 URL: https://issues.apache.org/jira/browse/ARTEMIS-93
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Affects Versions: 1.1.0
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0








[jira] [Created] (ARTEMIS-165) Karaf CLI integration

2015-07-23 Thread clebert suconic (JIRA)
clebert suconic created ARTEMIS-165:
---

 Summary: Karaf CLI integration
 Key: ARTEMIS-165
 URL: https://issues.apache.org/jira/browse/ARTEMIS-165
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
Reporter: clebert suconic








[jira] [Created] (ARTEMIS-163) Simplify libaio native / implement 4096 (queried) alignment

2015-07-23 Thread clebert suconic (JIRA)
clebert suconic created ARTEMIS-163:
---

 Summary: Simplify libaio native / implement 4096 (queried) 
alignment
 Key: ARTEMIS-163
 URL: https://issues.apache.org/jira/browse/ARTEMIS-163
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: clebert suconic
Assignee: clebert suconic
 Fix For: 1.0.1








[jira] [Updated] (ARTEMIS-162) Can't create colocated HA topology with JGroups discovery

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-162:

Fix Version/s: 1.0.1

 Can't create colocated HA topology with JGroups discovery
 -

 Key: ARTEMIS-162
 URL: https://issues.apache.org/jira/browse/ARTEMIS-162
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Jeff Mesnil
Priority: Critical
 Fix For: 1.0.1


 Hi, I tried to start two Artemis nodes from our application server in a 
 colocated topology with JGroups as the discovery method, but I'm not able 
 to do it. After both nodes are up, this exception starts spamming the logs:
 {noformat}
 java.io.NotSerializableException: org.jgroups.JChannel
 10:56:50,187 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
 10:56:50,188 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
 10:56:50,188 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
 10:56:50,188 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
 10:56:50,188 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
 10:56:50,188 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
 10:56:50,189 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
 10:56:50,189 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
 10:56:50,189 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
 10:56:50,189 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
 10:56:50,189 ERROR [stderr] (default I/O-5) at 
 java.util.ArrayList.writeObject(ArrayList.java:747)
 10:56:50,189 ERROR [stderr] (default I/O-5) at 
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 10:56:50,190 ERROR [stderr] (default I/O-5) at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 10:56:50,190 ERROR [stderr] (default I/O-5) at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 10:56:50,190 ERROR [stderr] (default I/O-5) at 
 java.lang.reflect.Method.invoke(Method.java:483)
 10:56:50,190 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:988)
 10:56:50,190 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
 10:56:50,190 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
 10:56:50,191 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
 10:56:50,191 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
 10:56:50,191 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
 10:56:50,191 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
 10:56:50,191 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
 10:56:50,192 ERROR [stderr] (default I/O-5) at 
 java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
 10:56:50,192 ERROR [stderr] (default I/O-5) at 
 org.apache.activemq.artemis.core.config.impl.ConfigurationImpl.copy(ConfigurationImpl.java:1528)
 10:56:50,192 ERROR [stderr] (default I/O-5) at 
 org.apache.activemq.artemis.core.server.cluster.ha.ColocatedHAManager.activateReplicatedBackup(ColocatedHAManager.java:190)
 10:56:50,192 ERROR [stderr] (default I/O-5) at 
 org.apache.activemq.artemis.core.server.cluster.ha.ColocatedHAManager.activateBackup(ColocatedHAManager.java:104)
 10:56:50,192 ERROR [stderr] (default I/O-5) at 
 org.apache.activemq.artemis.core.server.impl.ColocatedActivation$1.handlePacket(ColocatedActivation.java:141)
 10:56:50,193 ERROR [stderr] (default I/O-5) at 
 org.apache.activemq.artemis.core.server.cluster.ClusterController$ClusterControllerChannelHandler.handlePacket(ClusterController.java:424)
 10:56:50,193 ERROR [stderr] (default I/O-5) at 
 org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:652)
 10:56:50,193 ERROR [stderr] (default I/O-5) at 
 org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:402)
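The trace above shows `ConfigurationImpl.copy` serializing the whole configuration, which fails as soon as the object graph contains a non-serializable member such as `org.jgroups.JChannel`. The stdlib sketch below reproduces that failure mode with a stand-in `Channel` class; marking the field `transient` is one common workaround shown for illustration, not necessarily the fix Artemis adopted.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

// Minimal reproduction: serializing a config that holds a
// non-serializable member (stand-in for org.jgroups.JChannel) fails.
public class SerializableCopyDemo {
    static class Channel { }                       // not Serializable, like JChannel

    static class BrokenConfig implements Serializable {
        Channel channel = new Channel();           // dragged into the object stream
    }

    static class FixedConfig implements Serializable {
        transient Channel channel = new Channel(); // skipped during serialization
    }

    static boolean canSerialize(Object o) {
        try (ObjectOutputStream oos = new ObjectOutputStream(new ByteArrayOutputStream())) {
            oos.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;                          // the failure seen in the log
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(canSerialize(new BrokenConfig())); // false
        System.out.println(canSerialize(new FixedConfig()));  // true
    }
}
```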

[jira] [Updated] (ARTEMIS-160) After failback backup prints warnings to log

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-160:

Fix Version/s: 1.0.1

 After failback backup prints warnings to log
 

 Key: ARTEMIS-160
 URL: https://issues.apache.org/jira/browse/ARTEMIS-160
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Jeff Mesnil
 Fix For: 1.0.1


 We integrate Artemis in our app server.
 When the artemis server is stopped, we want to unregister any JNDI bindings 
 for the JMS resources.
 For failback, the only way to detect that the artemis server is stopped is to 
 use the ActivateCallback callback on Artemis *core* server. There is no way 
 to be notified when the JMS server (wrapping the core server) is stopped.
 This leads to a window where we remove JNDI bindings from the JMS server 
 before it is deactivated, but the actual operation is performed after it was 
 deactivated, and the server prints WARNING logs:
 {noformat}
 15:34:59,123 WARN [org.wildfly.extension.messaging-activemq] (ServerService 
 Thread Pool – 4) WFLYMSGAMQ0004: Failed to destroy queue: ExpiryQueue: 
 java.lang.IllegalStateException: Cannot access JMS Server, core server is not 
 yet active
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.checkInitialised(JMSServerManagerImpl.java:1640)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.access$1100(JMSServerManagerImpl.java:101)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl$3.runException(JMSServerManagerImpl.java:752)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.runAfterActive(JMSServerManagerImpl.java:1847)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.removeQueueFromBindingRegistry(JMSServerManagerImpl.java:741)
 at 
 org.wildfly.extension.messaging.activemq.jms.JMSQueueService$2.run(JMSQueueService.java:101)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
 at org.jboss.threads.JBossThread.run(JBossThread.java:320)
 15:34:59,123 WARN [org.wildfly.extension.messaging-activemq] (ServerService 
 Thread Pool – 68) WFLYMSGAMQ0004: Failed to destroy queue: AsyncQueue: 
 java.lang.IllegalStateException: Cannot access JMS Server, core server is not 
 yet active
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.checkInitialised(JMSServerManagerImpl.java:1640)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.access$1100(JMSServerManagerImpl.java:101)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl$3.runException(JMSServerManagerImpl.java:752)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.runAfterActive(JMSServerManagerImpl.java:1847)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.removeQueueFromBindingRegistry(JMSServerManagerImpl.java:741)
 at 
 org.wildfly.extension.messaging.activemq.jms.JMSQueueService$2.run(JMSQueueService.java:101)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
 at org.jboss.threads.JBossThread.run(JBossThread.java:320)
 15:34:59,123 WARN [org.wildfly.extension.messaging-activemq] (ServerService 
 Thread Pool – 9) WFLYMSGAMQ0004: Failed to destroy queue: DLQ: 
 java.lang.IllegalStateException: Cannot access JMS Server, core server is not 
 yet active
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.checkInitialised(JMSServerManagerImpl.java:1640)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.access$1100(JMSServerManagerImpl.java:101)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl$3.runException(JMSServerManagerImpl.java:752)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.runAfterActive(JMSServerManagerImpl.java:1847)
 at 
 org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.removeQueueFromBindingRegistry(JMSServerManagerImpl.java:741)
 at 
 org.wildfly.extension.messaging.activemq.jms.JMSQueueService$2.run(JMSQueueService.java:101)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
 at org.jboss.threads.JBossThread.run(JBossThread.java:320)
 {noformat}
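The window described above can be sketched deterministically: work queued while the server is active may only execute after failback has deactivated it, at which point the "is active" guard fails with the same message as in the log. Class and method names below are illustrative stand-ins, not Artemis APIs.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the deactivation race: a task queued while active runs
// only after the server has been deactivated by failback.
public class RunAfterActiveDemo {
    boolean active = true;
    final Queue<Runnable> pending = new ArrayDeque<>();

    // Queue work to run "after active", loosely mirroring the
    // runAfterActive pattern in JMSServerManagerImpl.
    void runAfterActive(Runnable task) {
        pending.add(task);
    }

    // Drain the queue; if deactivation happened first, the guard trips.
    void drain() {
        Runnable task;
        while ((task = pending.poll()) != null) {
            if (!active) {
                throw new IllegalStateException(
                    "Cannot access JMS Server, core server is not yet active");
            }
            task.run();
        }
    }

    public static void main(String[] args) {
        RunAfterActiveDemo server = new RunAfterActiveDemo();
        server.runAfterActive(() -> System.out.println("unbind ExpiryQueue"));
        server.active = false;  // failback deactivates the server first...
        try {
            server.drain();     // ...then the queued unbind hits the guard
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```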





[jira] [Comment Edited] (ARTEMIS-160) After failback backup prints warnings to log

2015-07-23 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638652#comment-14638652
 ] 

clebert suconic edited comment on ARTEMIS-160 at 7/23/15 11:16 AM:
---

I didn't understand what you need. Do you want the same callback on the JMS 
manager as well as on the core?


was (Author: clebertsuconic):
I didn't understand what you need, you want the same callback on the JMS server?

 After failback backup prints warnings to log
 

 Key: ARTEMIS-160
 URL: https://issues.apache.org/jira/browse/ARTEMIS-160
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Jeff Mesnil
 Fix For: 1.0.1



[jira] [Updated] (ARTEMIS-155) Incoming AMQP connection using cut-through ANONYMOUS SASL fails

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-155:

 Priority: Minor  (was: Major)
Fix Version/s: 1.0.1

Marking as minor as it will probably be postponed; will increase the priority 
in the next release.

 Incoming AMQP connection using cut-through ANONYMOUS SASL fails
 -

 Key: ARTEMIS-155
 URL: https://issues.apache.org/jira/browse/ARTEMIS-155
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: AMQP
Affects Versions: 1.0.0
Reporter: Ted Ross
Priority: Minor
 Fix For: 1.0.1


 When connecting an AMQP 1.0 connection to the broker using SASL ANONYMOUS, 
 the following exchange occurs:
 {noformat}
  Client                          Broker
  init(SASL)       ------->
  sasl.init(ANON)  ------->
  init(AMQP)       ------->
  open             ------->
                   <-------      init(SASL)
                   <-------      sasl.mechanisms
                   <-------      sasl.outcome(OK)
                   <-------      init(AMQP)
  socket closed by broker after timeout
 {noformat}
 It appears that the broker doesn't process the open frame.





[jira] [Commented] (ARTEMIS-160) After failback backup prints warnings to log

2015-07-23 Thread Jeff Mesnil (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638761#comment-14638761
 ] 

Jeff Mesnil commented on ARTEMIS-160:
-

Maybe, I don't know...

Is it a bug that code wrapped in runAfterActive is run after the server is 
deactivated?

 After failback backup prints warnings to log
 

 Key: ARTEMIS-160
 URL: https://issues.apache.org/jira/browse/ARTEMIS-160
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Jeff Mesnil
 Fix For: 1.0.1






[jira] [Commented] (ARTEMIS-93) OSGI support

2015-07-23 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-93?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638774#comment-14638774
 ] 

clebert suconic commented on ARTEMIS-93:


What I believe we really need (based on what AMQ5 does) is a big Uber JAR only.

 OSGI support
 

 Key: ARTEMIS-93
 URL: https://issues.apache.org/jira/browse/ARTEMIS-93
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Affects Versions: 1.1.0
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0








[jira] [Updated] (ARTEMIS-128) ClassCastException in openwire message conversion

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-128:

Fix Version/s: (was: 1.1.0)
   1.0.1

 ClassCastException in openwire message conversion
 -

 Key: ARTEMIS-128
 URL: https://issues.apache.org/jira/browse/ARTEMIS-128
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
  Components: OpenWire
Affects Versions: 1.0.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 1.0.1


 Some openwire unit tests send messages with groupID set to the Artemis 
 broker; the conversion gets a ClassCastException when trying to cast a 
 SimpleString to a String.
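A minimal sketch of the kind of null- and type-safe conversion the fix needs. The `SimpleString` class here is a stand-in for `org.apache.activemq.artemis.api.core.SimpleString`, and `asString` is an illustrative helper, not the actual patch:

```java
public class GroupIdConversion {
    // Stand-in for org.apache.activemq.artemis.api.core.SimpleString
    static final class SimpleString {
        private final String s;
        SimpleString(String s) { this.s = s; }
        @Override public String toString() { return s; }
    }

    // A blind cast to String throws ClassCastException for SimpleString
    // values; toString() handles both representations safely.
    static String asString(Object value) {
        if (value == null) return null;
        return value.toString();
    }

    public static void main(String[] args) {
        assert "group-1".equals(asString(new SimpleString("group-1")));
        assert "group-1".equals(asString("group-1"));
        assert asString(null) == null;
    }
}
```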





[jira] [Closed] (ARTEMIS-128) ClassCastException in openwire message conversion

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-128.
---
Resolution: Fixed

 ClassCastException in openwire message conversion
 -

 Key: ARTEMIS-128
 URL: https://issues.apache.org/jira/browse/ARTEMIS-128
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
  Components: OpenWire
Affects Versions: 1.0.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 1.0.1


 Some openwire unit tests send messages with groupID set to the Artemis 
 broker; the conversion gets a ClassCastException when trying to cast a 
 SimpleString to a String.





[jira] [Closed] (ARTEMIS-127) Adding activemq unit test module to Artemis

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-127.
---
Resolution: Fixed

 Adding activemq unit test module to Artemis
 ---

 Key: ARTEMIS-127
 URL: https://issues.apache.org/jira/browse/ARTEMIS-127
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
  Components: OpenWire
Affects Versions: 1.0.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 1.0.1


 Add a sub-module that imports the ActiveMQ (OpenWire) unit test suites.





[jira] [Commented] (ARTEMIS-93) OSGI support

2015-07-23 Thread Daniel Kulp (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-93?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638801#comment-14638801
 ] 

Daniel Kulp commented on ARTEMIS-93:


Please don't go that route... if OSGi support can be added properly without the 
uber jar, please do so.   The uberjar is a major hack that had to be done due 
to problems (like packages in multiple jars).

 OSGI support
 

 Key: ARTEMIS-93
 URL: https://issues.apache.org/jira/browse/ARTEMIS-93
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Affects Versions: 1.1.0
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0








[jira] [Closed] (ARTEMIS-126) import ActiveMQ OpenWire tests

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-126.
---
Resolution: Fixed

 import ActiveMQ OpenWire tests
 --

 Key: ARTEMIS-126
 URL: https://issues.apache.org/jira/browse/ARTEMIS-126
 Project: ActiveMQ Artemis
  Issue Type: Task
  Components: OpenWire
Reporter: Andy Taylor
Assignee: Howard Gao
 Fix For: 1.0.1


 Import any OpenWire tests so we can build up a picture of what functionality 
 is missing. Use this to build up a list of Jiras and prioritize. We should 
 probably make them sub-tasks of this Jira so we can keep track.





[jira] [Created] (ARTEMIS-169) Fix imported open wire tests

2015-07-23 Thread clebert suconic (JIRA)
clebert suconic created ARTEMIS-169:
---

 Summary: Fix imported open wire tests
 Key: ARTEMIS-169
 URL: https://issues.apache.org/jira/browse/ARTEMIS-169
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
Reporter: clebert suconic








[jira] [Commented] (ARTEMIS-168) Pluggable ACL Hierarchies

2015-07-23 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638804#comment-14638804
 ] 

clebert suconic commented on ARTEMIS-168:
-

Need to look at how AMQ5 does it, and look beyond that with wildcard support.

 Pluggable ACL Hierarchies
 -

 Key: ARTEMIS-168
 URL: https://issues.apache.org/jira/browse/ARTEMIS-168
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0


 ActiveMQ5 has a way to plug the security-settings into LDAP | Files, or a 
 Pluggable ACL implementation.





[jira] [Closed] (ARTEMIS-146) Fix Queue auto-creation

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-146.
---
   Resolution: Fixed
Fix Version/s: (was: 1.1.0)
   1.0.1

 Fix Queue auto-creation
 ---

 Key: ARTEMIS-146
 URL: https://issues.apache.org/jira/browse/ARTEMIS-146
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
  Components: OpenWire
Affects Versions: 1.0.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 1.0.1


 Currently we use JMSQueueCreator to create queues automatically; however, 
 openwire shouldn't depend on JMS. If the JMS component is not deployed, the 
 creator is null.
 Solution: use openwire's own queue creation method.





[jira] [Closed] (ARTEMIS-149) Advisory Message support

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-149.
---
   Resolution: Fixed
Fix Version/s: (was: 1.1.0)
   1.0.1

 Advisory Message support
 

 Key: ARTEMIS-149
 URL: https://issues.apache.org/jira/browse/ARTEMIS-149
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
  Components: OpenWire
Affects Versions: 1.0.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 1.0.1


 Advisory messages allow people to watch/monitor certain events that are 
 happening in the broker via standard JMS messages. 
 http://activemq.apache.org/advisory-message.html
 We can utilise the core's notification mechanism to provide this feature.





[jira] [Assigned] (ARTEMIS-161) Graceful shutdown: add a timeout to stop Artemis

2015-07-23 Thread Justin Bertram (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram reassigned ARTEMIS-161:
--

Assignee: Justin Bertram

 Graceful shutdown: add a timeout to stop Artemis
 

 Key: ARTEMIS-161
 URL: https://issues.apache.org/jira/browse/ARTEMIS-161
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.0.0
Reporter: Jeff Mesnil
Assignee: Justin Bertram
 Fix For: 1.0.1


 We want to provide a graceful shutdown for Artemis to leave some time for JMS 
 clients to finish their work before stopping the server.
 This is also covered by ARTEMIS-72 which deals with refusing new remote 
 connections once the shutdown process is started (while keeping in-vm 
 connections opened).
 This issue is about specifying a timeout when stopping the ActiveMQServer.
 It is possible to provide a general shutdown timeout in the server 
 configuration but this is not suitable.
 A shutdown process is contextual: it may be a quick shutdown in case of 
 emergency (with a timeout of some seconds) or a long timeout (several hours) 
 in case of planned upgrade for example.
 This parameter should be specified when the admin starts the shutdown process 
 and be passed to the stop() method of the ActiveMQServer (and its wrapping 
 JMSServerManager).
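A hypothetical sketch of what a contextual stop timeout could look like: wait for in-flight client work up to a caller-supplied deadline, then force the stop. The class and method names here are illustrative, not the Artemis API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class GracefulStop {
    // One count per client with in-flight work (illustrative bookkeeping)
    private final CountDownLatch activeClients;

    public GracefulStop(int clients) {
        this.activeClients = new CountDownLatch(clients);
    }

    public void clientFinished() { activeClients.countDown(); }

    /** Returns true if all clients finished within the timeout. */
    public boolean stop(long timeoutMillis) throws InterruptedException {
        boolean graceful = activeClients.await(timeoutMillis, TimeUnit.MILLISECONDS);
        // ...force-close remaining connections and stop the server here...
        return graceful;
    }
}
```

The admin would pass a short timeout for an emergency stop and a long one for a planned upgrade, matching the contextual behavior the issue asks for.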





[jira] [Updated] (ARTEMIS-46) AMQP interop: Active broker does not respect the drain flag.

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-46?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-46:
---
Priority: Minor  (was: Major)

 AMQP interop: Active broker does not respect the drain flag.
 --

 Key: ARTEMIS-46
 URL: https://issues.apache.org/jira/browse/ARTEMIS-46
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: AMQP
Affects Versions: 1.0.0
Reporter: Alan Conway
Priority: Minor
 Fix For: 1.0.1


 The drain flag on the AMQP flow performative allows a client to request 
 confirmation that it has received the last available message that it has 
 credit to receive.
 To reproduce using the qpid-send, qpid-receive clients from 
 http://svn.apache.org/repos/asf/qpid/trunk/qpid/. Create a JMS queue 'foo' on 
 the active broker then run:
 $ qpid-send -a jms.queue.foo -b localhost:5455 --content-string XXX 
 --connection-options='{protocol:amqp1.0}'
 $ qpid-receive -a jms.queue.foo -b localhost:5455 
 --connection-options='{protocol:amqp1.0}' --log-enable trace+:Protocol
 qpid-receive hangs; the last line of output is:
 2014-11-24 15:15:46 [Protocol] trace [58e8ee08-0f33-426b-b77a-450f7c3d976c]: 
 0 - @flow(19) [next-incoming-id=2, incoming-window=2147483647, 
 next-outgoing-id=0, outgoing-window=0, handle=0, delivery-count=1, 
 link-credit=1, drain=true]
 This shows that qpid-receive sent a flow with drain=true but never received a 
 response.
 Why is this important? Without the drain flag it is impossible for a client 
 to implement the simple behavior "get the next message" correctly. The flow 
 response tells the client immediately that there are no more messages 
 available for it. Without it the client can only use a timeout, which is 
 unreliable (if too short, the client may give up while the message is in 
 flight) and inefficient (if too long, the client will wait needlessly for 
 messages that the broker knows are not presently available).
 The spec 2.6.7 is a little ambiguous about whether this is a SHOULD or a MUST 
 behavior but without it it is impossible to implement the use cases described 
 in the following section.
 AMQP 1.0 specification 2.7.6
 drain
 The drain flag indicates how the sender SHOULD behave when insufficient 
 messages are available to consume the current link-credit. If set, the sender 
 will (after sending all available messages) advance the delivery-count as 
 much as possible, consuming all link-credit, and send the flow state to the 
 receiver. Only the receiver can independently modify this field. The sender's 
 value is always the last known value indicated by the receiver.
 If the link-credit is less than or equal to zero, i.e., the delivery-count is 
 the same as or greater than the delivery-limit, a sender MUST NOT send more 
 messages. If the link-credit is reduced by the receiver when transfers are 
 in-flight, the receiver MAY either handle the excess messages normally or 
 detach the link with a transfer-limit-exceeded error code.
 Figure 2.40: Flow Control
 
   +----------+ ---transfer--> +----------+
   |  Sender  |                | Receiver |
   +----------+ <----flow----- +----------+
 
  if link-credit = 0 then pause 
 
 If the sender's drain flag is set and there are no available messages, the 
 sender MUST advance its delivery-count until link-credit is zero, and send 
 its updated flow state to the receiver.
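The sender-side rule quoted above can be sketched in a few lines. This is an illustrative state machine, not broker code; `messagesAvailable` stands in for whatever the real sender consults:

```java
public class DrainHandling {
    long deliveryCount;
    long linkCredit;
    boolean messagesAvailable; // assumed state flag for the sketch

    /** Handle an incoming flow; returns true if a flow frame must be echoed. */
    boolean onFlow(boolean drain, long creditFromReceiver) {
        linkCredit = creditFromReceiver;
        if (drain && !messagesAvailable) {
            // No messages: consume all remaining credit by advancing
            // delivery-count, then report the updated state back.
            deliveryCount += linkCredit;
            linkCredit = 0;
            return true;
        }
        return false;
    }
}
```

Echoing the flow with link-credit exhausted is exactly the "no more messages available" signal that qpid-receive is waiting for in the trace above.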





[jira] [Updated] (ARTEMIS-137) Replace JBoss Logging by SLF4J

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-137:

Description: nice to support i18n
 Issue Type: Improvement  (was: Bug)
Summary: Replace JBoss Logging by SLF4J  (was: Review Logging 
implementation with i18n support)

 Replace JBoss Logging by SLF4J
 --

 Key: ARTEMIS-137
 URL: https://issues.apache.org/jira/browse/ARTEMIS-137
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: clebert suconic
 Fix For: 1.1.0


 nice to support i18n





[jira] [Updated] (ARTEMIS-108) Dynamic fail over and load balancing for OpenWire clients.

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-108:

Priority: Critical  (was: Major)

 Dynamic fail over and load balancing for OpenWire clients.
 --

 Key: ARTEMIS-108
 URL: https://issues.apache.org/jira/browse/ARTEMIS-108
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
Reporter: Martyn Taylor
Priority: Critical
 Fix For: 1.1.0


 ActiveMQ OpenWire supports a feature that allows the broker to inform clients 
 when broker nodes are added and removed from the cluster.  This information 
 can be used on the client to determine which broker to reconnect to in the 
 event of a failure.  The OpenWire protocol supports this functionality via a 
 ConnectionControl packet.  The ConnectionControl packet contains information 
 about the brokers immediately connected to the current broker (used for 
 failover).
 In addition a ConnectionControl packet can instruct a client to reconnect to 
 a different broker in the cluster (used for load balancing connections).





[jira] [Closed] (ARTEMIS-89) Intercepting support for stomp

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-89.
--
   Resolution: Fixed
Fix Version/s: (was: 1.1.0)
   1.0.0

 Intercepting support for stomp
 --

 Key: ARTEMIS-89
 URL: https://issues.apache.org/jira/browse/ARTEMIS-89
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Affects Versions: 1.0.0
Reporter: clebert suconic
Assignee: clebert suconic
 Fix For: 1.0.0








[jira] [Created] (ARTEMIS-170) Improve performance on AMQP

2015-07-23 Thread clebert suconic (JIRA)
clebert suconic created ARTEMIS-170:
---

 Summary: Improve performance on AMQP
 Key: ARTEMIS-170
 URL: https://issues.apache.org/jira/browse/ARTEMIS-170
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0


The performance of our AMQP implementation is not bad, but it's not at the same 
level as the Core protocol.

We should look at ways to improve this.





[jira] [Updated] (ARTEMIS-57) the 'to' field of AMQP messages gets cleared within the broker

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-57?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-57:
---
Fix Version/s: (was: 1.1.0)
   1.0.1

 the 'to' field of AMQP messages gets cleared within the broker
 --

 Key: ARTEMIS-57
 URL: https://issues.apache.org/jira/browse/ARTEMIS-57
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: AMQP
Affects Versions: 1.0.0
Reporter: Robbie Gemmell
 Fix For: 1.0.1


 When sending and receiving AMQP messages, the 'to' field of the Properties 
 section (which is meant to be immutable) is cleared as the message transits 
 through the broker.
 The encoding on the wire of a message Properties section as it was sent to 
 the broker:
 {noformat}
 <small-descriptor code=0x0:0x73/> # properties
 <list8 size=79 count=10> # properties
   <str8-utf8 size=51> # message-id
 localhost.localdomai
 n-54104-141838672362
 2-0:1:1:1-1
   </str8-utf8>
   <null/> # user-id
   <str8-utf8 size=7> # to
 myQueue
   </str8-utf8>
   <null/> # subject
   <null/> # reply-to
   <null/> # correlation-id
   <null/> # content-type
   <null/> # content-encoding
   <null/> # absolute-expiry-time
   <time t=1418386724423/> #2014/12/12 12:18:44.423 # creation-time
   # <null/> group-id
   # <null/> group-sequence
   # <null/> reply-to-group-id
 </list8>
 {noformat}
 The encoding on the wire on its way to a consumer:
 {noformat}
 <small-descriptor code=0x0:0x73/> # properties
 <list8 size=19 count=10> # properties
   <null/> # message-id
   <null/> # user-id
   <null/> # to
   <null/> # subject
   <null/> # reply-to
   <null/> # correlation-id
   <null/> # content-type
   <null/> # content-encoding
   <null/> # absolute-expiry-time
   <time t=1418386724423/> #2014/12/12 12:18:44.423 # creation-time
   # <null/> group-id
   # <null/> group-sequence
   # <null/> reply-to-group-id
 </list8>
 {noformat}





[jira] [Commented] (ARTEMIS-12) Compare and Contrast ActiveMQ6 and ActiveMQ 5.X embedability.

2015-07-23 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-12?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638910#comment-14638910
 ] 

clebert suconic commented on ARTEMIS-12:


This is done!

 Compare and Contrast ActiveMQ6 and ActiveMQ 5.X embedability.
 -

 Key: ARTEMIS-12
 URL: https://issues.apache.org/jira/browse/ARTEMIS-12
 Project: ActiveMQ Artemis
  Issue Type: Task
Affects Versions: 1.0.0
Reporter: Martyn Taylor
 Fix For: 1.1.0








[jira] [Updated] (ARTEMIS-33) Generic integration with SASL Frameworks

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-33?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-33:
---
Priority: Critical  (was: Major)

 Generic integration with SASL Frameworks
 

 Key: ARTEMIS-33
 URL: https://issues.apache.org/jira/browse/ARTEMIS-33
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Affects Versions: 1.0.0
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0


 Right now we are bound to user/password or anonymous SASL.
 We should use a framework that would allow SASL integration with a bigger 
 number of mechanisms.
 We should investigate options from the JDK for this... or whether there is 
 any other framework available.
 I believe this only affects AMQP, but as part of this issue we should 
 investigate whether there is any interest in extending SASL to other 
 protocols.





[jira] [Created] (ARTEMIS-171) Improve AMQP Performance

2015-07-23 Thread clebert suconic (JIRA)
clebert suconic created ARTEMIS-171:
---

 Summary: Improve AMQP Performance
 Key: ARTEMIS-171
 URL: https://issues.apache.org/jira/browse/ARTEMIS-171
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
Reporter: clebert suconic
Priority: Critical








[jira] [Commented] (AMQ-5082) ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening

2015-07-23 Thread Scott Feldstein (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638940#comment-14638940
 ] 

Scott Feldstein commented on AMQ-5082:
--

I have not seen this issue since applying the patch on top of the 
origin/activemq-5.11.x branch.

My zookeeper timeout setting is pretty high.  On activemq my zkSessionTimeout 
is 60s, and the timeouts in my zoo.cfg are set to:

tickTime=5000
minSessionTimeout=2
maxSessionTimeout=18

Let me know if you want me to check the logs for anything specific.  It is 
running at DEBUG level and the logs are being forwarded to a log collection 
tool, so I should be able to check easily.

 ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening
 ---

 Key: AMQ-5082
 URL: https://issues.apache.org/jira/browse/AMQ-5082
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-leveldb-store
Affects Versions: 5.9.0, 5.10.0
Reporter: Scott Feldstein
Assignee: Christian Posta
Priority: Critical
 Fix For: 5.12.0

 Attachments: 03-07.tgz, amq_5082_threads.tar.gz, 
 mq-node1-cluster.failure, mq-node2-cluster.failure, mq-node3-cluster.failure, 
 zookeeper.out-cluster.failure


 I have a 3 node amq cluster and one zookeeper node using a replicatedLevelDB 
 persistence adapter.
 {code}
 <persistenceAdapter>
   <replicatedLevelDB
     directory="${activemq.data}/leveldb"
     replicas="3"
     bind="tcp://0.0.0.0:0"
     zkAddress="zookeep0:2181"
     zkPath="/activemq/leveldb-stores"/>
 </persistenceAdapter>
 {code}
 After about a day or so of sitting idle there are cascading failures and the 
 cluster stops listening altogether.
 I can reproduce this consistently on 5.9 and the latest 5.10 (commit 
 2360fb859694bacac1e48092e53a56b388e1d2f0).  I am going to attach logs from 
 the three mq nodes and the zookeeper logs that reflect the time when the 
 cluster starts having issues.
 The cluster stops listening at Mar 4, 2014 4:56:50 AM (within 5 seconds).
 The OSs are all CentOS 5.9 on one ESX server, so I doubt networking is an 
 issue.
 If you need more data it should be pretty easy to get whatever is needed 
 since it is consistently reproducible.
 This bug may be related to AMQ-5026, but looks different enough to file a 
 separate issue.





[jira] [Updated] (ARTEMIS-74) import ActiveMQ 5 JAAS security

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-74?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-74:
---
Fix Version/s: (was: 1.1.0)
   1.0.0

 import ActiveMQ 5 JAAS security
 ---

 Key: ARTEMIS-74
 URL: https://issues.apache.org/jira/browse/ARTEMIS-74
 Project: ActiveMQ Artemis
  Issue Type: Task
Reporter: Andy Taylor
Assignee: Andy Taylor
 Fix For: 1.0.0


 We should replace the poor JAAS implementation in ActiveMQ 6 as it is of no 
 use with the current bootstrap mechanism. We should replace it with the 
 mature implementation that is currently in ActiveMQ 5.





[jira] [Closed] (ARTEMIS-101) change the exp/imp to make each message to be its own XML on a zip file

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-101.
---
   Resolution: Won't Fix
Fix Version/s: (was: 1.1.0)

 change the exp/imp to make each message to be its own XML on a zip file
 ---

 Key: ARTEMIS-101
 URL: https://issues.apache.org/jira/browse/ARTEMIS-101
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: clebert suconic

 Notice that the current format has to be preserved for compatibility of the 
 exp/imp. It's an improvement to the format, but the old format has to remain 
 compatible.





[jira] [Updated] (ARTEMIS-60) Transactionally consumed AMQP messages are settled without any disposition state.

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-60?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-60:
---
Fix Version/s: (was: 1.1.0)
   1.0.1

 Transactionally consumed AMQP messages are settled without any disposition 
 state.
 -

 Key: ARTEMIS-60
 URL: https://issues.apache.org/jira/browse/ARTEMIS-60
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: AMQP
Affects Versions: 1.0.0
Reporter: Robbie Gemmell
 Fix For: 1.0.1


 When the broker receives an unsettled disposition frame from a consumer 
 accepting a message using TransactionalState to make it part of a 
 transaction, it settles the message but does so with no state at all. This 
 process causes a settled disposition frame to be sent to the client which 
 contains no state. The message should retain TransactionalState linking it to 
 the transaction and its outcome.
 Similar issue to AMQ-5456 for ActiveMQ 5.
 The issue can be seen in the protocol trace below:
 {noformat}
 <TCP time=17:55:01.374487 seqno=576785035 size=38>
   <source host=127.0.0.1 port=53919/>
   <target host=127.0.0.1 port=5455/>
   
   <frame size=38 doff=2 chan=1>
 
 <small-descriptor code=0x0:0x15/> # disposition
 <list8 size=25 count=5> # disposition
   <true/> # role
   <uint0/> # first
   <uint0/> # last
   <false/> # settled
   <small-descriptor code=0x0:0x34/> # state   TransactionalState
   <list8 size=15 count=2> # state
 <bin8 size=8> # txn-id
   00 00 00 00 00 00 00 0d 
 </bin8>
 <small-descriptor code=0x0:0x24/> # outcome
 <list0/> # accepted
   </list8>
   # <null/> batchable [false]
 </list8>
 
   </frame>
   
 </TCP>
 {noformat}
 {noformat}
 <TCP time=17:55:01.377185 seqno=78417459 size=20>
   <source host=127.0.0.1 port=5455/>
   <target host=127.0.0.1 port=53919/>
   
   <frame size=20 doff=2 chan=1>
 
 <small-descriptor code=0x0:0x15/> # disposition
 <list8 size=7 count=4> # disposition
   <true/> # role
   <small-uint>1</small-uint> # first
   <small-uint>1</small-uint> # last
   <true/> # settled
   # <null/> state No state
   # <null/> batchable [false]
 </list8>
 
   </frame>
   
 </TCP>
 {noformat}





[jira] [Closed] (ARTEMIS-23) Improve connection load balancing

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-23?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-23.
--
   Resolution: Duplicate
Fix Version/s: (was: 1.1.0)

 Improve connection load balancing
 -

 Key: ARTEMIS-23
 URL: https://issues.apache.org/jira/browse/ARTEMIS-23
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Reporter: clebert suconic
Assignee: Andy Taylor

 Move the connection load balancing to the server side by allowing the cluster 
 to decide where a client should connect on initial connect, removing the 
 functionality from the client. All the client should need to know on 
 reconnect or failover is its initial server's id. We can then plumb in more 
 dynamic load balancing based on queue depth etc.





[jira] [Updated] (ARTEMIS-108) Dynamic fail over and load balancing for OpenWire clients.

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-108:

Issue Type: Sub-task  (was: New Feature)
Parent: ARTEMIS-22

 Dynamic fail over and load balancing for OpenWire clients.
 --

 Key: ARTEMIS-108
 URL: https://issues.apache.org/jira/browse/ARTEMIS-108
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
Reporter: Martyn Taylor
 Fix For: 1.1.0


 ActiveMQ OpenWire supports a feature that allows the broker to inform clients 
 when broker nodes are added and removed from the cluster.  This information 
 can be used on the client to determine which broker to reconnect to in the 
 event of a failure.  The OpenWire protocol supports this functionality via a 
 ConnectionControl packet.  The ConnectionControl packet contains information 
 about the brokers immediately connected to the current broker (used for 
 failover).
 In addition a ConnectionControl packet can instruct a client to reconnect to 
 a different broker in the cluster (used for load balancing connections).





[jira] [Commented] (AMQ-5890) AMQP: possible NPE when handling disposition with Modified state

2015-07-23 Thread Robbie Gemmell (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638836#comment-14638836
 ] 

Robbie Gemmell commented on AMQ-5890:
-

Duplicate comment issue raised as INFRA-10038.

 AMQP: possible NPE when handling disposition with Modified state
 

 Key: AMQ-5890
 URL: https://issues.apache.org/jira/browse/AMQ-5890
 Project: ActiveMQ
  Issue Type: Bug
  Components: AMQP
Reporter: Robbie Gemmell
Assignee: Robbie Gemmell
Priority: Minor
 Fix For: 5.12.0


 If a consumer sends a disposition with Modified state in which the 
 'deliveryFailed' field is not populated, the broker will NPE. This is because 
 the relevant value is a Boolean object rather than boolean primitive. That 
 appears to be because there is actually no default value specified for the 
 field in the specification, and it is defined only to be set when delivery 
 actually failed (values that are not set are encoded nulls in the AMQP frame).
 The implementation needs to be updated to handle this value being null, but 
 will be left permissive of it being set to false.
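The null-safe check the fix needs can be sketched in one line. Since the spec gives `deliveryFailed` no default, the decoded field is a `Boolean` that may be null, and unboxing it directly is what triggers the NPE; `Boolean.TRUE.equals` treats an absent field as "not failed":

```java
public class ModifiedStateHandling {
    // Unboxing a null Boolean (e.g. "if (flag)") throws NullPointerException;
    // Boolean.TRUE.equals(null) is simply false.
    static boolean deliveryFailed(Boolean flag) {
        return Boolean.TRUE.equals(flag);
    }

    public static void main(String[] args) {
        assert !deliveryFailed(null);        // absent field: no NPE, not failed
        assert deliveryFailed(Boolean.TRUE);
        assert !deliveryFailed(Boolean.FALSE);
    }
}
```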





[jira] [Comment Edited] (ARTEMIS-70) Implement resource limits

2015-07-23 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601340#comment-14601340
 ] 

clebert suconic edited comment on ARTEMIS-70 at 7/23/15 2:18 PM:
-

I implemented:

* overall number of connections
* connections per user
* queues per user

I have not implemented:

* connections per IP address
* max queue size a user can create
* names a user may call a queue that he creates
  ** constraint names


was (Author: jbertram):
I implemented:

* overall number of connections
* connections per user
* queues per user

I have not implemented:

* connections per IP address
* max queue size a user can create
* names a user may call a queue that he creates

 Implement resource limits
 -

 Key: ARTEMIS-70
 URL: https://issues.apache.org/jira/browse/ARTEMIS-70
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Affects Versions: 1.0.0
Reporter: Michael Cressman
Assignee: Justin Bertram
 Fix For: 1.1.0


 Implement various resource limits within the system:
 - overall number of connections
 - connections per user
 - connections per IP address
 - queues per user
 - (possibly: number of sessions, number of subscriptions per user)
 The per user limits can be a default maximum for everyone plus specific 
 limits for particular users.
 Other things:
 - limit the max queue size a user can create
 - limit the names a user may call a queue that he creates
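A minimal sketch of one of the requested limits, connections per user with a default maximum plus per-user overrides. All names here are illustrative, not an Artemis API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ConnectionLimiter {
    private final int defaultMax;
    private final Map<String, Integer> perUserMax = new ConcurrentHashMap<>();
    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    public ConnectionLimiter(int defaultMax) { this.defaultMax = defaultMax; }

    /** Per-user override of the default maximum. */
    public void setLimit(String user, int max) { perUserMax.put(user, max); }

    /** Returns false (and does not count) when the user is at its limit. */
    public boolean tryConnect(String user) {
        int max = perUserMax.getOrDefault(user, defaultMax);
        AtomicInteger c = counts.computeIfAbsent(user, u -> new AtomicInteger());
        while (true) {
            int cur = c.get();
            if (cur >= max) return false;
            if (c.compareAndSet(cur, cur + 1)) return true; // atomic reserve
        }
    }

    public void disconnect(String user) {
        AtomicInteger c = counts.get(user);
        if (c != null) c.decrementAndGet();
    }
}
```

The compare-and-set loop keeps the check-then-increment atomic under concurrent connection attempts; the other limits listed (per IP, queues per user) follow the same pattern keyed differently.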





[jira] [Updated] (ARTEMIS-70) Implement resource limits

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-70:
---
Priority: Critical  (was: Major)

 Implement resource limits
 -

 Key: ARTEMIS-70
 URL: https://issues.apache.org/jira/browse/ARTEMIS-70
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Affects Versions: 1.0.0
Reporter: Michael Cressman
Assignee: Justin Bertram
Priority: Critical
 Fix For: 1.1.0


 Implement various resource limits within the system:
 - overall number of connections
 - connections per user
 - connections per IP address
 - queues per user
 - (possibly: number of sessions, number of subscriptions per user)
 The per user limits can be a default maximum for everyone plus specific 
 limits for particular users.
 Other things:
 - limit the max queue size a user can create
 - limit the names a user may call a queue that he creates





[jira] [Updated] (ARTEMIS-56) the message-id of AMQP messages gets cleared within the broker

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-56?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-56:
---
Fix Version/s: (was: 1.1.0)
   1.0.1

 the message-id of AMQP messages gets cleared within the broker
 --

 Key: ARTEMIS-56
 URL: https://issues.apache.org/jira/browse/ARTEMIS-56
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: AMQP
Affects Versions: 1.0.0
Reporter: Robbie Gemmell
 Fix For: 1.0.1


 When sending and receiving AMQP messages, the message-id field of the 
 Properties section (which is meant to be immutable) is cleared as the message 
 transits through the broker.
 The encoding on the wire of a message Properties section as it was sent to 
 the broker:
 {noformat}
 <small-descriptor code=0x0:0x73/> # properties
 <list8 size=79 count=10> # properties
   <str8-utf8 size=51> # message-id
 localhost.localdomain-54104-1418386723622-0:1:1:1-1
   </str8-utf8>
   <null/> # user-id
   <str8-utf8 size=7> # to
 myQueue
   </str8-utf8>
   <null/> # subject
   <null/> # reply-to
   <null/> # correlation-id
   <null/> # content-type
   <null/> # content-encoding
   <null/> # absolute-expiry-time
   <time t=1418386724423/> # 2014/12/12 12:18:44.423 # creation-time
   # <null/> group-id
   # <null/> group-sequence
   # <null/> reply-to-group-id
 </list8>
 {noformat}
 The encoding on the wire on its way to a consumer:
 {noformat}
 <small-descriptor code=0x0:0x73/> # properties
 <list8 size=19 count=10> # properties
   <null/> # message-id
   <null/> # user-id
   <null/> # to
   <null/> # subject
   <null/> # reply-to
   <null/> # correlation-id
   <null/> # content-type
   <null/> # content-encoding
   <null/> # absolute-expiry-time
   <time t=1418386724423/> # 2014/12/12 12:18:44.423 # creation-time
   # <null/> group-id
   # <null/> group-sequence
   # <null/> reply-to-group-id
 </list8>
 {noformat}





[jira] [Updated] (ARTEMIS-170) Improve AMQP

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-170:

Summary: Improve AMQP  (was: Improve performance on AMQP)

 Improve AMQP
 

 Key: ARTEMIS-170
 URL: https://issues.apache.org/jira/browse/ARTEMIS-170
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0


 The performance on our AMQP implementation is not bad, but it's not at the 
 same level as Core Protocol.
 We should look at ways to improve this.





[jira] [Updated] (ARTEMIS-137) Replace JBoss Logging by SLF4J

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-137:

Priority: Critical  (was: Major)

 Replace JBoss Logging by SLF4J
 --

 Key: ARTEMIS-137
 URL: https://issues.apache.org/jira/browse/ARTEMIS-137
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0


 nice to support i18n





[jira] [Closed] (ARTEMIS-74) import ActiveMQ 5 JAAS security

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-74?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-74.
--
Resolution: Fixed

 import ActiveMQ 5 JAAS security
 ---

 Key: ARTEMIS-74
 URL: https://issues.apache.org/jira/browse/ARTEMIS-74
 Project: ActiveMQ Artemis
  Issue Type: Task
Reporter: Andy Taylor
Assignee: Andy Taylor
 Fix For: 1.0.0


 We should replace the poor JAAS implementation in ActiveMQ 6, as it is of no 
 use with the current bootstrap mechanism, with the mature implementation that 
 is currently in ActiveMQ 5.





[jira] [Updated] (AMQ-5761) Back port expoter / importer from Artemis

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated AMQ-5761:
-
Description: This feature is about exporting ActiveMQ5 messages into an 
independent format (Artemis has one) and importing them into Artemis.  (was: this is 
related to https://issues.apache.org/jira/browse/ACTIVEMQ6-101

This should use the new XML format already.)

 Back port expoter / importer from Artemis
 -

 Key: AMQ-5761
 URL: https://issues.apache.org/jira/browse/AMQ-5761
 Project: ActiveMQ
  Issue Type: New Feature
Reporter: clebert suconic

 This feature is about exporting ActiveMQ5 messages into an independent format 
 (Artemis has one) and importing them into Artemis.





[jira] [Updated] (ARTEMIS-81) Verify the activemq-rest is working and review messaging model

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-81?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-81:
---
Priority: Critical  (was: Major)

 Verify the activemq-rest is working and review messaging model
 --

 Key: ARTEMIS-81
 URL: https://issues.apache.org/jira/browse/ARTEMIS-81
 Project: ActiveMQ Artemis
  Issue Type: Task
Reporter: Martyn Taylor
Assignee: Martyn Taylor
Priority: Critical
 Fix For: 1.1.0








[jira] [Updated] (ARTEMIS-172) Improve AMQP-JMS mapping on server

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-172:

Priority: Critical  (was: Major)

 Improve AMQP-JMS mapping on server
 --

 Key: ARTEMIS-172
 URL: https://issues.apache.org/jira/browse/ARTEMIS-172
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0








[jira] [Closed] (ARTEMIS-21) Created queues from management should be part of the configuration

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-21?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-21.
--
   Resolution: Fixed
Fix Version/s: (was: 1.1.0)

 Created queues from management should be part of the configuration
 --

 Key: ARTEMIS-21
 URL: https://issues.apache.org/jira/browse/ARTEMIS-21
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: clebert suconic

 When you create a queue, topic, or any other type of destination we may offer 
 in the future from management, it must be added to the configuration 
 automatically, so that if all the data is reset the destinations are 
 preserved as part of the configuration.





[jira] [Created] (ARTEMIS-175) CLI improvement with --docker

2015-07-23 Thread clebert suconic (JIRA)
clebert suconic created ARTEMIS-175:
---

 Summary: CLI improvement with --docker
 Key: ARTEMIS-175
 URL: https://issues.apache.org/jira/browse/ARTEMIS-175
 Project: ActiveMQ Artemis
  Issue Type: Task
Reporter: clebert suconic
Priority: Minor
 Fix For: 1.0.1


that would help to keep servers alive





[jira] [Created] (ARTEMIS-173) Document migration path for all activemq5 features

2015-07-23 Thread clebert suconic (JIRA)
clebert suconic created ARTEMIS-173:
---

 Summary: Document migration path for all activemq5 features
 Key: ARTEMIS-173
 URL: https://issues.apache.org/jira/browse/ARTEMIS-173
 Project: ActiveMQ Artemis
  Issue Type: Task
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0








[jira] [Commented] (AMQ-5082) ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening

2015-07-23 Thread Jim Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638990#comment-14638990
 ] 

Jim Robinson commented on AMQ-5082:
---

Lars,

Thanks for the reply.  Can you tell me whether or not your zkSessionTimeout 
matches the timeout value of your zookeeper server?  I'm not sure what playing 
with the value means. :)


 ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening
 ---

 Key: AMQ-5082
 URL: https://issues.apache.org/jira/browse/AMQ-5082
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-leveldb-store
Affects Versions: 5.9.0, 5.10.0
Reporter: Scott Feldstein
Assignee: Christian Posta
Priority: Critical
 Fix For: 5.12.0

 Attachments: 03-07.tgz, amq_5082_threads.tar.gz, 
 mq-node1-cluster.failure, mq-node2-cluster.failure, mq-node3-cluster.failure, 
 zookeeper.out-cluster.failure


 I have a 3 node amq cluster and one zookeeper node using a replicatedLevelDB 
 persistence adapter.
 {code}
 <persistenceAdapter>
   <replicatedLevelDB
     directory="${activemq.data}/leveldb"
     replicas="3"
     bind="tcp://0.0.0.0:0"
     zkAddress="zookeep0:2181"
     zkPath="/activemq/leveldb-stores/"/>
 </persistenceAdapter>
 {code}
 After about a day or so of sitting idle there are cascading failures and the 
 cluster completely stops listening altogether.
 I can reproduce this consistently on 5.9 and the latest 5.10 (commit 
 2360fb859694bacac1e48092e53a56b388e1d2f0).  I am going to attach logs from 
 the three mq nodes and the zookeeper logs that reflect the time when the 
 cluster starts having issues.
 The cluster stops listening Mar 4, 2014 4:56:50 AM (within 5 seconds).
 The OSs are all centos 5.9 on one esx server, so I doubt networking is an 
 issue.
 If you need more data it should be pretty easy to get whatever is needed 
 since it is consistently reproducible.
 This bug may be related to AMQ-5026, but looks different enough to file a 
 separate issue.





[jira] [Commented] (AMQ-5082) ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening

2015-07-23 Thread Jim Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638995#comment-14638995
 ] 

Jim Robinson commented on AMQ-5082:
---

Ok, so that looks very much like what I have seen on my dev cluster.  Can you 
tell me whether or not your zkSessionTimeout value matches the value used in 
your zookeeper cluster?

 ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening
 ---

 Key: AMQ-5082
 URL: https://issues.apache.org/jira/browse/AMQ-5082
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-leveldb-store
Affects Versions: 5.9.0, 5.10.0
Reporter: Scott Feldstein
Assignee: Christian Posta
Priority: Critical
 Fix For: 5.12.0

 Attachments: 03-07.tgz, amq_5082_threads.tar.gz, 
 mq-node1-cluster.failure, mq-node2-cluster.failure, mq-node3-cluster.failure, 
 zookeeper.out-cluster.failure


 I have a 3 node amq cluster and one zookeeper node using a replicatedLevelDB 
 persistence adapter.
 {code}
 <persistenceAdapter>
   <replicatedLevelDB
     directory="${activemq.data}/leveldb"
     replicas="3"
     bind="tcp://0.0.0.0:0"
     zkAddress="zookeep0:2181"
     zkPath="/activemq/leveldb-stores/"/>
 </persistenceAdapter>
 {code}
 After about a day or so of sitting idle there are cascading failures and the 
 cluster completely stops listening altogether.
 I can reproduce this consistently on 5.9 and the latest 5.10 (commit 
 2360fb859694bacac1e48092e53a56b388e1d2f0).  I am going to attach logs from 
 the three mq nodes and the zookeeper logs that reflect the time when the 
 cluster starts having issues.
 The cluster stops listening Mar 4, 2014 4:56:50 AM (within 5 seconds).
 The OSs are all centos 5.9 on one esx server, so I doubt networking is an 
 issue.
 If you need more data it should be pretty easy to get whatever is needed 
 since it is consistently reproducible.
 This bug may be related to AMQ-5026, but looks different enough to file a 
 separate issue.





[jira] [Created] (ARTEMIS-174) Separated Journal per Address

2015-07-23 Thread clebert suconic (JIRA)
clebert suconic created ARTEMIS-174:
---

 Summary: Separated Journal per Address
 Key: ARTEMIS-174
 URL: https://issues.apache.org/jira/browse/ARTEMIS-174
 Project: ActiveMQ Artemis
  Issue Type: Task
Reporter: clebert suconic
Priority: Minor
 Fix For: 1.1.0








[jira] [Updated] (ARTEMIS-168) Pluggable ACL Hierarchies

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-168:

Priority: Critical  (was: Major)

 Pluggable ACL Hierarchies
 -

 Key: ARTEMIS-168
 URL: https://issues.apache.org/jira/browse/ARTEMIS-168
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0


 ActiveMQ5 has a way to plug the security-settings into LDAP | Files, or a 
 Pluggable ACL implementation.





[jira] [Created] (AMQ-5899) Unable to recover after going below viable H/A master/slave (Unkown data type

2015-07-23 Thread Gabriel Nieves (JIRA)
Gabriel Nieves created AMQ-5899:
---

 Summary: Unable to recover after going below viable H/A 
master/slave (Unkown data type
 Key: AMQ-5899
 URL: https://issues.apache.org/jira/browse/AMQ-5899
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-leveldb-store
Affects Versions: 5.10.0
 Environment: 3 CentOS servers running ActiveMQ (5.10.0 or 5.11.0), 
each connected in a different site. Using the H/A master/slave concept. SSL 
enabled. Using LevelDB. Using ZooKeeper. 
Reporter: Gabriel Nieves


I have 3 servers running ActiveMQ in high-availability mode. Let's call these 
servers A, B and C, and let's say A is the master. If you stop two servers, A and 
B, while you are sending messages to server A, everything goes down, which makes 
sense. Now if you start A and B back up, you will get an "Unknown data type" 
error and a java.io.IOException or a NullPointerException after a master has 
been selected and a slave has attached.

I suspect this is caused mainly because replication was occurring at the time 
these servers stopped, thus corrupting the LevelDB store. I say corrupting; 
however, there have been cases where I only started one server up after going 
below viable and everything worked fine, so this could be caused by a 
synchronization issue with LevelDB replication.

After I get this "Unknown data type" error, whose value changes every time I 
reproduce this issue, the master server restarts. This happens many times and 
eventually the ActiveMQ process dies.

So far, to get these servers up and running again, I need to clear the 
activemq-data folder where all the replication logs are located. This is not an 
acceptable solution.





[jira] [Updated] (ARTEMIS-22) Review OpenWire Implementation

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-22:
---
Priority: Critical  (was: Major)

 Review OpenWire Implementation
 --

 Key: ARTEMIS-22
 URL: https://issues.apache.org/jira/browse/ARTEMIS-22
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: OpenWire
Affects Versions: 1.0.0
Reporter: clebert suconic
Priority: Critical
 Fix For: 1.1.0


 This is a review of the current OpenWire implementation on the activemq6 
 branch.
 I would suggest that someone with experience on the ActiveMQ 5 team review 
 the implementation.





[jira] [Created] (ARTEMIS-166) Karaf WARs integration

2015-07-23 Thread clebert suconic (JIRA)
clebert suconic created ARTEMIS-166:
---

 Summary: Karaf WARs integration
 Key: ARTEMIS-166
 URL: https://issues.apache.org/jira/browse/ARTEMIS-166
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
Reporter: clebert suconic


Karaf supports deploying WARs.





[jira] [Updated] (ARTEMIS-167) OSGI artemis extensions (e.g vertx integration)

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-167:

Priority: Minor  (was: Major)

 OSGI artemis extensions (e.g vertx integration)
 ---

 Key: ARTEMIS-167
 URL: https://issues.apache.org/jira/browse/ARTEMIS-167
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
Reporter: clebert suconic
Priority: Minor
 Fix For: 1.1.0








[jira] [Created] (ARTEMIS-167) OSGI artemis extensions (e.g vertx integration)

2015-07-23 Thread clebert suconic (JIRA)
clebert suconic created ARTEMIS-167:
---

 Summary: OSGI artemis extensions (e.g vertx integration)
 Key: ARTEMIS-167
 URL: https://issues.apache.org/jira/browse/ARTEMIS-167
 Project: ActiveMQ Artemis
  Issue Type: Sub-task
Reporter: clebert suconic








[jira] [Created] (ARTEMIS-168) Pluggable ACL Hierarchies

2015-07-23 Thread clebert suconic (JIRA)
clebert suconic created ARTEMIS-168:
---

 Summary: Pluggable ACL Hierarchies
 Key: ARTEMIS-168
 URL: https://issues.apache.org/jira/browse/ARTEMIS-168
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: clebert suconic


ActiveMQ5 has a way to plug the security-settings into LDAP | Files, or a 
Pluggable ACL implementation.





[jira] [Updated] (ARTEMIS-168) Pluggable ACL Hierarchies

2015-07-23 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-168:

Fix Version/s: 1.1.0

 Pluggable ACL Hierarchies
 -

 Key: ARTEMIS-168
 URL: https://issues.apache.org/jira/browse/ARTEMIS-168
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: clebert suconic
 Fix For: 1.1.0


 ActiveMQ5 has a way to plug the security-settings into LDAP | Files, or a 
 Pluggable ACL implementation.





[jira] [Created] (ARTEMIS-162) Can't create colocated HA topology with JGroups discovery

2015-07-23 Thread Jeff Mesnil (JIRA)
Jeff Mesnil created ARTEMIS-162:
---

 Summary: Can't create colocated HA topology with JGroups discovery
 Key: ARTEMIS-162
 URL: https://issues.apache.org/jira/browse/ARTEMIS-162
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Jeff Mesnil
Priority: Critical


Hi, I tried to start two Artemis nodes from our application server in a colocated 
topology with JGroups as the discovery method, but I'm not able to do it. After 
both nodes are up, this exception starts spamming the logs:

{noformat}
java.io.NotSerializableException: org.jgroups.JChannel
10:56:50,187 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
10:56:50,188 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
10:56:50,188 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
10:56:50,188 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
10:56:50,188 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
10:56:50,188 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
10:56:50,189 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
10:56:50,189 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
10:56:50,189 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
10:56:50,189 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
10:56:50,189 ERROR [stderr] (default I/O-5) at 
java.util.ArrayList.writeObject(ArrayList.java:747)
10:56:50,189 ERROR [stderr] (default I/O-5) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
10:56:50,190 ERROR [stderr] (default I/O-5) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
10:56:50,190 ERROR [stderr] (default I/O-5) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
10:56:50,190 ERROR [stderr] (default I/O-5) at 
java.lang.reflect.Method.invoke(Method.java:483)
10:56:50,190 ERROR [stderr] (default I/O-5) at 
java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:988)
10:56:50,190 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
10:56:50,190 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
10:56:50,191 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
10:56:50,191 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
10:56:50,191 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
10:56:50,191 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
10:56:50,191 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
10:56:50,192 ERROR [stderr] (default I/O-5) at 
java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
10:56:50,192 ERROR [stderr] (default I/O-5) at 
org.apache.activemq.artemis.core.config.impl.ConfigurationImpl.copy(ConfigurationImpl.java:1528)
10:56:50,192 ERROR [stderr] (default I/O-5) at 
org.apache.activemq.artemis.core.server.cluster.ha.ColocatedHAManager.activateReplicatedBackup(ColocatedHAManager.java:190)
10:56:50,192 ERROR [stderr] (default I/O-5) at 
org.apache.activemq.artemis.core.server.cluster.ha.ColocatedHAManager.activateBackup(ColocatedHAManager.java:104)
10:56:50,192 ERROR [stderr] (default I/O-5) at 
org.apache.activemq.artemis.core.server.impl.ColocatedActivation$1.handlePacket(ColocatedActivation.java:141)
10:56:50,193 ERROR [stderr] (default I/O-5) at 
org.apache.activemq.artemis.core.server.cluster.ClusterController$ClusterControllerChannelHandler.handlePacket(ClusterController.java:424)
10:56:50,193 ERROR [stderr] (default I/O-5) at 
org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:652)
10:56:50,193 ERROR [stderr] (default I/O-5) at 
org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:402)
10:56:50,193 ERROR [stderr] (default I/O-5) at 
org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:379)
10:56:50,193 ERROR [stderr] (default I/O-5) at 
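The failure at the bottom of the trace (ConfigurationImpl.copy serializing a configuration whose object graph reaches a live org.jgroups.JChannel) can be reproduced in miniature. This is a hedged sketch: the Config class below is hypothetical and only stands in for the real configuration object.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class SerializationDemo {
    // Stand-in (hypothetical) for ConfigurationImpl: Serializable itself,
    // but its collection holds a non-Serializable member, playing the role
    // of the live org.jgroups.JChannel in the stack trace above.
    static class Config implements Serializable {
        List<Object> members = new ArrayList<>();
    }

    static String tryCopy(Config config) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(config); // fails like ConfigurationImpl.copy does
            return "ok";
        } catch (NotSerializableException e) {
            return "NotSerializableException: " + e.getMessage();
        } catch (IOException e) {
            return "IOException: " + e.getMessage();
        }
    }

    static String demo() {
        Config config = new Config();
        config.members.add(new Object()); // java.lang.Object is not Serializable
        return tryCopy(config);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "NotSerializableException: java.lang.Object"
    }
}
```

A single non-Serializable member anywhere in the graph aborts the entire writeObject call, which is why copying the configuration fails as soon as a JChannel is reachable from it.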

[jira] [Commented] (AMQ-5082) ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening

2015-07-23 Thread Lars Neumann (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638341#comment-14638341
 ] 

Lars Neumann commented on AMQ-5082:
---

Hi,

I applied the patch to two brokers but they still become unresponsive, though it 
seems like they are up a bit longer than without the patch. Playing with the 
{{zkSessionTimeout}} didn't help.

Cheers,
Lars

 ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening
 ---

 Key: AMQ-5082
 URL: https://issues.apache.org/jira/browse/AMQ-5082
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-leveldb-store
Affects Versions: 5.9.0, 5.10.0
Reporter: Scott Feldstein
Assignee: Christian Posta
Priority: Critical
 Fix For: 5.12.0

 Attachments: 03-07.tgz, amq_5082_threads.tar.gz, 
 mq-node1-cluster.failure, mq-node2-cluster.failure, mq-node3-cluster.failure, 
 zookeeper.out-cluster.failure


 I have a 3 node amq cluster and one zookeeper node using a replicatedLevelDB 
 persistence adapter.
 {code}
 <persistenceAdapter>
   <replicatedLevelDB
     directory="${activemq.data}/leveldb"
     replicas="3"
     bind="tcp://0.0.0.0:0"
     zkAddress="zookeep0:2181"
     zkPath="/activemq/leveldb-stores/"/>
 </persistenceAdapter>
 {code}
 After about a day or so of sitting idle there are cascading failures and the 
 cluster completely stops listening altogether.
 I can reproduce this consistently on 5.9 and the latest 5.10 (commit 
 2360fb859694bacac1e48092e53a56b388e1d2f0).  I am going to attach logs from 
 the three mq nodes and the zookeeper logs that reflect the time when the 
 cluster starts having issues.
 The cluster stops listening Mar 4, 2014 4:56:50 AM (within 5 seconds).
 The OSs are all centos 5.9 on one esx server, so I doubt networking is an 
 issue.
 If you need more data it should be pretty easy to get whatever is needed 
 since it is consistently reproducible.
 This bug may be related to AMQ-5026, but looks different enough to file a 
 separate issue.





[jira] [Commented] (AMQ-5082) ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening

2015-07-23 Thread Anuj Khandelwal (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14638308#comment-14638308
 ] 

Anuj Khandelwal commented on AMQ-5082:
--

Hi,

It was really helpful to see this patch. With this patch the broker cluster 
should not become orphaned. However, it is still not clear to me why the 
ZooKeeper server is not able to reply within the timeframe. This error still 
occurs even after increasing the zkSessionTimeout.

Details from ZooKeeper: from what I understand, the client sends a ping every 
1/3 of the timeout, and then looks for a response before another 1/3 elapses. 
This allows time to reconnect to a different server (and still maintain the 
session) if the current server were to become unavailable.

2014-03-04 04:56:46,861 | INFO  | Client session timed out, have not heard 
from server in 2667ms for sessionid 0x1437e99789c040c, closing socket 
connection and attempting reconnect | org.apache.zookeeper.ClientCnxn | 
main-SendThread(10.1.1.230:2181)
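The 1/3 timing described above can be sketched as follows. This is only an illustration of the heuristic (the class and method names are mine, not ZooKeeper's client API), with a made-up 4000 ms session timeout:

```java
public class ZkPingSchedule {
    // ZooKeeper-style client timing: ping at ~1/3 of the session timeout,
    // expect a reply before 2/3 of it elapses, leaving the final third to
    // reconnect to another server while the session is still alive.
    static int pingIntervalMs(int sessionTimeoutMs) {
        return sessionTimeoutMs / 3;
    }

    static int readTimeoutMs(int sessionTimeoutMs) {
        return 2 * sessionTimeoutMs / 3;
    }

    public static void main(String[] args) {
        int sessionTimeoutMs = 4000; // hypothetical negotiated timeout
        System.out.println("ping every " + pingIntervalMs(sessionTimeoutMs) + " ms");
        System.out.println("expect reply within " + readTimeoutMs(sessionTimeoutMs) + " ms");
    }
}
```

Under that assumption, the 2667 ms in the log line above is consistent with a client giving up after roughly 2/3 of a ~4000 ms session timeout.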

Thanks,
Anuj

 ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening
 ---

 Key: AMQ-5082
 URL: https://issues.apache.org/jira/browse/AMQ-5082
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-leveldb-store
Affects Versions: 5.9.0, 5.10.0
Reporter: Scott Feldstein
Assignee: Christian Posta
Priority: Critical
 Fix For: 5.12.0

 Attachments: 03-07.tgz, amq_5082_threads.tar.gz, 
 mq-node1-cluster.failure, mq-node2-cluster.failure, mq-node3-cluster.failure, 
 zookeeper.out-cluster.failure


 I have a 3 node amq cluster and one zookeeper node using a replicatedLevelDB 
 persistence adapter.
 {code}
 <persistenceAdapter>
   <replicatedLevelDB
     directory="${activemq.data}/leveldb"
     replicas="3"
     bind="tcp://0.0.0.0:0"
     zkAddress="zookeep0:2181"
     zkPath="/activemq/leveldb-stores/"/>
 </persistenceAdapter>
 {code}
 After about a day or so of sitting idle there are cascading failures and the 
 cluster completely stops listening altogether.
 I can reproduce this consistently on 5.9 and the latest 5.10 (commit 
 2360fb859694bacac1e48092e53a56b388e1d2f0).  I am going to attach logs from 
 the three mq nodes and the zookeeper logs that reflect the time when the 
 cluster starts having issues.
 The cluster stops listening Mar 4, 2014 4:56:50 AM (within 5 seconds).
 The OSs are all centos 5.9 on one esx server, so I doubt networking is an 
 issue.
 If you need more data it should be pretty easy to get whatever is needed 
 since it is consistently reproducible.
 This bug may be related to AMQ-5026, but looks different enough to file a 
 separate issue.


