RE: Why is the amount of Enqueued less than Dequeued

2013-11-01 Thread Joshua.X.Peng (mis.usca08.Newegg) 22507


Best Regards
Joshua Peng


CONFIDENTIALITY NOTICE: This email and any files transmitted with it may 
contain privileged or otherwise confidential information.  It is intended only 
for the person or persons to whom it is addressed. If you received this message 
in error, you are not authorized to read, print, retain, copy, disclose, 
disseminate, distribute, or use this message any part thereof or any 
information contained therein. Please notify the sender immediately and delete 
all copies of this message. Thank you in advance for your cooperation.

From: Joshua.X.Peng (mis.usca08.Newegg) 22507
Sent: Friday, November 01, 2013 4:47 PM
To: 'dev-subscr...@activemq.apache.org'
Subject: Why is the amount of Enqueued less than Dequeued
Importance: High

Dear ActiveMQ team:
Our project uses ActiveMQ as its message server, and there is a question we 
would like to consult you about.

Why is the number of Enqueued messages less than the number of Dequeued 
messages? As far as I know, one message is not dispatched to more than one 
consumer. The documentation says: dequeued messages = messages delivered from 
the destination to consumers; this number can be higher than the number of 
enqueued messages if a message was delivered to multiple consumers (topics). 
But we do not use the Topic model.

Our project must avoid consuming the same message repeatedly.

[Inline screenshot of the queue statistics (image001.png) omitted]


Version: ActiveMQ 5.7.0.
Acknowledgement mode: not DUPS_OK_ACKNOWLEDGE.
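For what it's worth, one common explanation for dequeued exceeding enqueued on 
a plain queue (no topics, no duplicate dispatch) is that these are per-uptime 
broker statistics: they reset to zero on a broker restart, while messages 
recovered from the persistent store are still dispatched and counted as 
dequeued. A minimal simulation of that effect (hypothetical code, not ActiveMQ 
internals):

```python
# Hypothetical model of per-uptime destination counters (not ActiveMQ code).
# Counters start at zero on every broker start; messages recovered from the
# store re-enter the pending set without touching the enqueue counter.

class QueueStats:
    def __init__(self, recovered=0):
        self.enqueued = 0
        self.dequeued = 0
        self.pending = recovered

    def enqueue(self, n=1):
        self.enqueued += n
        self.pending += n

    def dispatch_all(self):
        # Every dispatched message increments the dequeue counter.
        self.dequeued += self.pending
        self.pending = 0

# Before restart: 5 messages produced, none yet consumed.
before = QueueStats()
before.enqueue(5)

# After restart: the 5 persisted messages are recovered, then consumed.
after = QueueStats(recovered=5)
after.dispatch_all()
print(after.enqueued, after.dequeued)  # 0 5 -> dequeued exceeds enqueued
```

Under this reading, comparing the two counters across a restart (or after a 
statistics reset) is not meaningful; only deltas taken within a single uptime 
window are.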


Best Regards
Joshua Peng




[jira] [Commented] (AMQ-4837) LevelDB corrupted in AMQ cluster

2013-11-01 Thread Hiram Chirino (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13811515#comment-13811515
 ] 

Hiram Chirino commented on AMQ-4837:


Hi Guillaume,

Ok. I think I got it this time.  Could you check this build for me?

https://repository.apache.org/content/repositories/snapshots/org/apache/activemq/apache-activemq/5.10-SNAPSHOT/apache-activemq-5.10-20131101.162431-14-bin.tar.gz

> LevelDB corrupted in AMQ cluster
> 
>
> Key: AMQ-4837
> URL: https://issues.apache.org/jira/browse/AMQ-4837
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.9.0
> Environment: CentOS, Linux version 2.6.32-71.29.1.el6.x86_64
> java-1.7.0-openjdk.x86_64/java-1.6.0-openjdk.x86_64
> zookeeper-3.4.5.2
>Reporter: Guillaume
>Assignee: Hiram Chirino
>Priority: Critical
> Attachments: LevelDBCorrupted.zip
>
>
> I have clustered 3 ActiveMQ instances using replicated LevelDB and ZooKeeper. 
> When performing some tests using the Web UI, I came across issues that appear 
> to corrupt the LevelDB data files.
> The issue can be reproduced by performing the following steps:
> 1. Start 3 ActiveMQ nodes.
> 2. Push a message to the master (Node1) and browse the queue using the web 
> UI.
> 3. Stop the master node (Node1).
> 4. Push a message to the new master (Node2) and browse the queue using the 
> web UI. Message summary and queue content are OK.
> 5. Start Node1.
> 6. Stop the master node (Node2).
> 7. Browse the queue using the web UI on the new master (Node3). The message 
> summary is OK, however when clicking on the queue there are no message 
> details. An error (see below) is logged by the master, which attempts a 
> restart.
> From this point the database appears to be corrupted and the same error 
> occurs on each node indefinitely (shutdown/restart). The only way around it 
> is to stop the nodes and clear the data files.
> However, when a message is pushed between steps 5 and 6, the error does not 
> occur.
> ====================
> LevelDB configuration on the 3 instances:
> <persistenceAdapter>
>   <replicatedLevelDB
>       directory="${activemq.data}/leveldb"
>       replicas="3"
>       bind="tcp://0.0.0.0:0"
>       zkAddress="zkserver:2181"
>       zkPath="/activemq/leveldb-stores"
>       />
> </persistenceAdapter>
> ====================
> The error is:
> INFO | Stopping BrokerService[localhost] due to exception, java.io.IOException
> java.io.IOException
> at 
> org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:39)
> at 
> org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:543)
> at 
> org.apache.activemq.leveldb.LevelDBClient.might_fail_using_index(LevelDBClient.scala:974)
> at 
> org.apache.activemq.leveldb.LevelDBClient.collectionCursor(LevelDBClient.scala:1270)
> at 
> org.apache.activemq.leveldb.LevelDBClient.queueCursor(LevelDBClient.scala:1194)
> at 
> org.apache.activemq.leveldb.DBManager.cursorMessages(DBManager.scala:708)
>at 
> org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.recoverNextMessages(LevelDBStore.scala:741)
> at 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:106)
> at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:258)
> at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:108)
> at 
> org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:157)
> at 
> org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1875)
> at 
> org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2086)
> at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1581)
> at 
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
> at 
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1198)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1194)
> at 
> org

[jira] [Resolved] (AMQ-4668) REST API only accepts non-form content if content type of text/xml

2013-11-01 Thread Claus Ibsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claus Ibsen resolved AMQ-4668.
--

Resolution: Fixed

Thanks for the patch.

> REST API only accepts non-form content if content type of text/xml
> --
>
> Key: AMQ-4668
> URL: https://issues.apache.org/jira/browse/AMQ-4668
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker, Transport
>Affects Versions: 5.8.0
>Reporter: Chris Robison
>Assignee: Claus Ibsen
>Priority: Minor
> Fix For: 5.10.0
>
> Attachments: MessageServletSupport.java.patch
>
>
> If you don't format the POST content like "body={content}", the servlet will 
> only read the body if the content type is text/xml. I'd like to submit JSON, 
> so this does not work for me. I've included a patch to open it up a bit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)
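For readers following along, the restriction and the patched behavior can be 
sketched roughly as follows; the function and parameter names here are 
invented for illustration and do not match MessageServletSupport.java:

```python
# Illustrative sketch only (invented names, not the actual servlet code).
# Before the patch, a raw POST body was read only for content type text/xml;
# the patch relaxes this so other non-form types such as application/json
# are accepted too.

FORM_TYPE = "application/x-www-form-urlencoded"

def extract_body(content_type, form_params, raw_body, patched=True):
    if content_type.startswith(FORM_TYPE):
        return form_params.get("body")  # form post: "body=..." parameter
    if patched:
        return raw_body                 # any raw content type accepted
    # Unpatched: only text/xml bodies are read.
    return raw_body if content_type.startswith("text/xml") else None

# Unpatched behavior rejects JSON bodies; patched behavior accepts them.
print(extract_body("application/json", {}, '{"k":1}', patched=False))  # None
print(extract_body("application/json", {}, '{"k":1}', patched=True))   # {"k":1}
```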


[jira] [Resolved] (AMQ-4841) lease-database-locker does not use the configured tablePrefix in UPDATE statement

2013-11-01 Thread Claus Ibsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claus Ibsen resolved AMQ-4841.
--

Resolution: Fixed

Thanks Pat for reporting the issue, the workaround, and the test case.

>  lease-database-locker does not use the configured tablePrefix in UPDATE 
> statement
> --
>
> Key: AMQ-4841
> URL: https://issues.apache.org/jira/browse/AMQ-4841
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.9.0
> Environment: - tested on latest trunk
>Reporter: Pat Fox
>Assignee: Claus Ibsen
> Fix For: 5.10.0
>
> Attachments: JDBCLockTablePrefixTest.java
>
>
> Using the configuration
> {code}
> <persistenceAdapter>
>   <jdbcPersistenceAdapter dataSource="#mysql-ds" lockKeepAlivePeriod="5000">
>     <locker>
>       <lease-database-locker/>
>     </locker>
>     <statements>
>       <statements tablePrefix="TTT_"
>         durableSubAcksTableName="AMQ_ACKS" lockTableName="AMQ_LOCK"/>
>     </statements>
>   </jdbcPersistenceAdapter>
> </persistenceAdapter>
> {code}
> The logging shows the lock table was created WITH the configured prefix, but 
> the lease locker UPDATE statement does not use that prefix:
> {code}
> 2013-10-30 14:33:03,245 | DEBUG | Executing SQL: CREATE TABLE TTT_AMQ_LOCK( 
> ID BIGINT NOT NULL, TIME BIGINT, BROKER_NAME VARCHAR(250), PRIMARY KEY (ID) ) 
> ENGINE=INNODB | org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter | 
> main
> ...
> 2013-10-30 14:33:10,889 | DEBUG | jdbcBroker, lease keepAlive Query is UPDATE 
> ACTIVEMQ_LOCK SET BROKER_NAME=?, TIME=? WHERE BROKER_NAME=? AND ID = 1 | 
> org.apache.activemq.store.jdbc.LeaseDatabaseLocker | ActiveMQ JDBC PA 
> Scheduled Task
> {code}





[jira] [Commented] (AMQ-4841) lease-database-locker does not use the configured tablePrefix in UPDATE statement

2013-11-01 Thread Claus Ibsen (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13811143#comment-13811143
 ] 

Claus Ibsen commented on AMQ-4841:
--

Yeah, it's an ordering issue: the locker gets the "default" statements, not 
the ones configured later.
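The ordering problem can be illustrated with a small sketch (invented names, 
not the actual ActiveMQ classes): the locker ends up holding a default 
statements object, built before the user-configured tablePrefix is applied, so 
its UPDATE targets the unprefixed table while the DDL uses the prefixed one:

```python
# Illustrative sketch of the ordering bug (invented names, not ActiveMQ code).

class Statements:
    def __init__(self, table_prefix="", lock_table_name="ACTIVEMQ_LOCK"):
        self.table_prefix = table_prefix
        self.lock_table_name = lock_table_name

    def lock_table(self):
        return self.table_prefix + self.lock_table_name

    def lease_update(self):
        return ("UPDATE " + self.lock_table() +
                " SET BROKER_NAME=?, TIME=? WHERE BROKER_NAME=? AND ID = 1")

# What the locker was handed (defaults, before configuration was applied):
defaults = Statements()
# What the persistence adapter actually ends up using:
configured = Statements(table_prefix="TTT_", lock_table_name="AMQ_LOCK")

print(defaults.lease_update())    # targets ACTIVEMQ_LOCK (the bug)
print(configured.lease_update())  # targets TTT_AMQ_LOCK (as the DDL created)
```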






[jira] [Updated] (AMQ-4841) lease-database-locker does not use the configured tablePrefix in UPDATE statement

2013-11-01 Thread Claus Ibsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claus Ibsen updated AMQ-4841:
-

Fix Version/s: 5.10.0






[jira] [Assigned] (AMQ-4841) lease-database-locker does not use the configured tablePrefix in UPDATE statement

2013-11-01 Thread Claus Ibsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claus Ibsen reassigned AMQ-4841:


Assignee: Claus Ibsen






[jira] [Commented] (AMQ-3725) Kahadb error during SAN failover delayed write - Allow kahaDB to recover in a similar manner as the JDBC store using the IOExceptionHandler

2013-11-01 Thread Dejan Bosanac (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13811124#comment-13811124
 ] 

Dejan Bosanac commented on AMQ-3725:


Here's a snapshot containing a fix, to be tested:

https://repository.apache.org/content/repositories/snapshots/org/apache/activemq/apache-activemq/5.10-SNAPSHOT/apache-activemq-5.10-20131101.033855-13-bin.tar.gz
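For reference, the JDBC-store-style recovery mentioned in this issue is 
configured through an ioExceptionHandler on the broker. A sketch of such a 
configuration (element and attribute names from memory; verify against the 
DefaultIOExceptionHandler documentation before use):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="Broker1">
  <!-- Keep the broker running on transient store I/O errors instead of
       shutting down; per this issue, with KahaDB this alone can leave an
       invalid index, which is what the linked snapshot addresses. -->
  <ioExceptionHandler>
    <defaultIOExceptionHandler ignoreAllErrors="true"/>
  </ioExceptionHandler>
</broker>
```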

> Kahadb error during SAN failover delayed write - Allow kahaDB to recover in a 
> similar manner as the JDBC store using the IOExceptionHandler
> ---
>
> Key: AMQ-3725
> URL: https://issues.apache.org/jira/browse/AMQ-3725
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.5.1
>Reporter: Jason Sherman
> Fix For: 5.10.0
>
> Attachments: AMQ-3725-10112013.txt
>
>
> An issue can arise that causes the broker to terminate when using KahaDB 
> with a SAN, when the SAN fails over. In this case the failover process is 
> seamless; however, on failback there is a 2-3 second window where writes are 
> blocked and the broker terminates. With the JDBC data store a similar 
> situation can be handled by using the IOExceptionHandler. However, with 
> KahaDB, adding this same IOExceptionHandler prevents the broker from 
> terminating, but KahaDB retains an invalid index.
> {code}
>  INFO | ActiveMQ JMS Message Broker (Broker1, 
> ID:macbookpro-251a.home-56915-1328715089252-0:1) started
>  INFO | jetty-7.1.6.v20100715
>  INFO | ActiveMQ WebConsole initialized.
>  INFO | Initializing Spring FrameworkServlet 'dispatcher'
>  INFO | ActiveMQ Console at http://0.0.0.0:8161/admin
>  INFO | ActiveMQ Web Demos at http://0.0.0.0:8161/demo
>  INFO | RESTful file access application at http://0.0.0.0:8161/fileserver
>  INFO | FUSE Web Console at http://0.0.0.0:8161/console
>  INFO | Started SelectChannelConnector@0.0.0.0:8161
> ERROR | KahaDB failed to store to Journal
> java.io.SyncFailedException: sync failed
>   at java.io.FileDescriptor.sync(Native Method)
>   at 
> org.apache.kahadb.journal.DataFileAppender.processQueue(DataFileAppender.java:382)
>   at 
> org.apache.kahadb.journal.DataFileAppender$2.run(DataFileAppender.java:203)
>  INFO | Ignoring IO exception, java.io.SyncFailedException: sync failed
> java.io.SyncFailedException: sync failed
>   at java.io.FileDescriptor.sync(Native Method)
>   at 
> org.apache.kahadb.journal.DataFileAppender.processQueue(DataFileAppender.java:382)
>   at 
> org.apache.kahadb.journal.DataFileAppender$2.run(DataFileAppender.java:203)
> ERROR | Checkpoint failed
> java.io.SyncFailedException: sync failed
>   at java.io.FileDescriptor.sync(Native Method)
>   at 
> org.apache.kahadb.journal.DataFileAppender.processQueue(DataFileAppender.java:382)
>   at 
> org.apache.kahadb.journal.DataFileAppender$2.run(DataFileAppender.java:203)
>  INFO | Ignoring IO exception, java.io.SyncFailedException: sync failed
> java.io.SyncFailedException: sync failed
>   at java.io.FileDescriptor.sync(Native Method)
>   at 
> org.apache.kahadb.journal.DataFileAppender.processQueue(DataFileAppender.java:382)
>   at 
> org.apache.kahadb.journal.DataFileAppender$2.run(DataFileAppender.java:203)
> ERROR | KahaDB failed to store to Journal
> java.io.FileNotFoundException: /Volumes/NAS-01/data/kahadb/db-1.log (No such 
> file or directory)
>   at java.io.RandomAccessFile.open(Native Method)
>   at java.io.RandomAccessFile.(RandomAccessFile.java:216)
>   at 
> org.apache.kahadb.journal.DataFile.openRandomAccessFile(DataFile.java:70)
>   at 
> org.apache.kahadb.journal.DataFileAppender.processQueue(DataFileAppender.java:324)
>   at 
> org.apache.kahadb.journal.DataFileAppender$2.run(DataFileAppender.java:203)
>  INFO | Ignoring IO exception, java.io.FileNotFoundException: 
> /Volumes/NAS-01/data/kahadb/db-1.log (No such file or directory)
> java.io.FileNotFoundException: /Volumes/NAS-01/data/kahadb/db-1.log (No such 
> file or directory)
>   at java.io.RandomAccessFile.open(Native Method)
>   at java.io.RandomAccessFile.(RandomAccessFile.java:216)
>   at 
> org.apache.kahadb.journal.DataFile.openRandomAccessFile(DataFile.java:70)
>   at 
> org.apache.kahadb.journal.DataFileAppender.processQueue(DataFileAppender.java:324)
>   at 
> org.apache.kahadb.journal.DataFileAppender$2.run(DataFileAppender.java:203)
> ERROR | KahaDB failed to store to Journal
> java.io.FileNotFoundException: /Volumes/NAS-01/data/kahadb/db-1.log (No such 
> file or directory)
>   at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>   at 
> org.apache.kahadb.journal.DataFile.op