[jira] [Commented] (AMQNET-413) Message producers do not respect DTC Transactions correctly

2013-05-20 Thread Daniel Marbach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQNET-413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662703#comment-13662703
 ] 

Daniel Marbach commented on AMQNET-413:
---

Why is it closed when incomplete???

> Message producers do not respect DTC Transactions correctly
> ---
>
> Key: AMQNET-413
> URL: https://issues.apache.org/jira/browse/AMQNET-413
> Project: ActiveMQ .Net
>  Issue Type: Bug
>  Components: ActiveMQ
>Reporter: Remo Gloor
>Assignee: Jim Gomes
> Attachments: allDTCImprovments.patch, 
> AllMessagesAreAcknowledgedAndRolledbackIndependentOfTheTransaction.patch, 
> AMQNET-413.patch
>
>
> When consuming messages in a transaction, sending new messages while 
> processing them, and rolling the transaction back and committing it on 
> retry, the number of published messages should equal the number of received ones.
> But the number of sent messages is bigger than the number of received ones. 
> This means some of the message sends are not rolled back while others are.
> EDIT: Further analysis has shown that TransactionContext.TransactionId 
> is null when sending, even though a transaction is in progress and not yet 
> completed. It must be incorrectly assigned to null somewhere.
> The following application demonstrates the problem when enqueuing 100+ 
> messages to foo.bar:
> using System;
> using System.Transactions;
> using Apache.NMS;
> using Apache.NMS.ActiveMQ;
> using Apache.NMS.ActiveMQ.Commands;
> using Apache.NMS.Util;
>
> class Program
> {
>     private static INetTxSession activeMqSession;
>     private static IMessageConsumer consumer;
>     private static INetTxConnection connection;
>
>     static void Main(string[] args)
>     {
>         using (connection = CreateActiveMqConnection())
>         using (activeMqSession = connection.CreateNetTxSession())
>         using (consumer = activeMqSession.CreateConsumer(
>             SessionUtil.GetQueue(activeMqSession, "queue://foo.bar")))
>         {
>             connection.Start();
>             while (true)
>             {
>                 try
>                 {
>                     using (TransactionScope scoped = new TransactionScope(TransactionScopeOption.RequiresNew))
>                     {
>                         IMessage msg = null;
>                         while (msg == null)
>                         {
>                             msg = consumer.ReceiveNoWait();
>                         }
>                         OnMessage(msg);
>                         scoped.Complete();
>                     }
>                 }
>                 catch (Exception exception) { }
>             }
>         }
>     }
>
>     private static INetTxConnection CreateActiveMqConnection()
>     {
>         var connectionFactory = new Apache.NMS.ActiveMQ.NetTxConnectionFactory("activemq:tcp://localhost:61616")
>         {
>             AcknowledgementMode = AcknowledgementMode.Transactional
>         };
>         return connectionFactory.CreateNetTxConnection();
>     }
>
>     private static void OnMessage(IMessage message)
>     {
>         var x = new TestSinglePhaseCommit();
>         Console.WriteLine("Processing message {0} in transaction {1} - {2}",
>             message.NMSMessageId,
>             Transaction.Current.TransactionInformation.LocalIdentifier,
>             Transaction.Current.TransactionInformation.DistributedIdentifier);
>         var session2 = activeMqSession;
>         {
>             Transaction.Current.EnlistDurable(Guid.NewGuid(), x, EnlistmentOptions.None);
>             using (var producer = session2.CreateProducer(SessionUtil.GetQueue(session2, "queue://foo.baz")))
>             {
>                 producer.Send(new ActiveMQTextMessage("foo"));
>             }
>             if (!message.NMSRedelivered) throw new Exception();
>         }
>     }
> }
>
> internal class TestSinglePhaseCommit : ISinglePhaseNotification
> {
>     public void Prepare(PreparingEnlistment preparingEnlistment)
>     {
>         preparingEnlistment.Prepared();
>     }
>
>     public void Commit(Enlistment enlistment)
>     {
>         enlistment.Done();
>     }
>
>     public void Rollback(Enlistment enlistment)
>     {
>         enlistment.Done();
>     }
>
>     public void InDoubt(Enlistment enlistment)
>     {
>         enlistment.Done();
>     }
>
>     public void SinglePhaseCommit(SinglePhaseEnlistment singlePhaseEnlistment)
>     {
>         singlePhaseEnlistment.Committed();
>     }
> }
> internal class TestSinglePhaseCommit : ISinglePhaseNotification
> {
> public void Prepare(PreparingEnlistment preparingEnlistment)
> {
> preparingEnlistment.Prepared();
> }
> public void Commit(Enlistment enlistment)
> {
> enlistment.Done();
> }
> public void Rollback(Enlistment enlistment)
> {
> enlistment.Done();
> }
> public void InDoubt(Enlistment enlistment)
> {
> enlistment.Done();
> }
> public void SinglePhaseCommit(SinglePhaseEnlistment 
> singlePhaseEnlistment)
> {
> singlePhaseEnlistment.Committed();
> }
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[VOTE] Release Apache.NMS API v1.6.0

2013-05-20 Thread Timothy Bish

Hello

This is a call for a vote on the release of the Apache.NMS API v1.6.0.  This 
package contains the API for the various Apache.NMS clients along with utility 
classes and unit test cases against the abstract NMS API.

Changes in this version include:

* Added Recover method to ISession.
* Added new events to ISession for Transaction begin, commit and rollback.
* Added method in IConnection to purge created Temp Destinations.
* A few minor bug fixes for the common code bits.

The binaries and source bundles for the Release Candidate can be found here:


The Wiki Page for this release is here:


Please cast your votes (the vote will be open for 72 hrs):

[ ] +1 Release the source as Apache NMS API 1.6.0
[ ] -1 Veto the release (provide specific comments)

Here's my +1

Regards,

--
Tim Bish
Sr Software Engineer | RedHat Inc.
tim.b...@redhat.com | www.fusesource.com | www.redhat.com
skype: tabish121 | twitter: @tabish121
blog: http://timbish.blogspot.com/

www.camelone.org : The open source integration conference:



[jira] [Updated] (AMQNET-417) DTC Recovery should be done once for each application start

2013-05-20 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-417:


Priority: Minor  (was: Critical)

> DTC Recovery should be done once for each application start
> ---
>
> Key: AMQNET-417
> URL: https://issues.apache.org/jira/browse/AMQNET-417
> Project: ActiveMQ .Net
>  Issue Type: New Feature
>  Components: ActiveMQ
>Reporter: Remo Gloor
>Assignee: Jim Gomes
>Priority: Minor
> Attachments: allDTCImprovments.patch, 
> DtcRecoveryShouldNotRunAfterConnectionsAreStarted.patch
>
>
> DTC recovery is currently executed when a new session is created. This is not 
> correct because there can be other sessions that are currently committing a 
> transaction. These transactions must not be recovered, otherwise strange 
> behavior results.
> The recovery should be done just once for each resource manager ID.
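A minimal sketch of the "recover once per resource manager ID" idea described above; the class and method names below are illustrative and not part of the NMS codebase.

using System;
using System.Collections.Generic;

// Hypothetical guard, assuming recovery can be keyed by resource manager ID.
static class DtcRecoveryGuard
{
    private static readonly HashSet<Guid> recovered = new HashSet<Guid>();
    private static readonly object syncRoot = new object();

    // Runs the supplied recovery action only the first time a given
    // resource manager ID is seen in this application instance.
    public static void RecoverOnce(Guid resourceManagerId, Action recover)
    {
        lock (syncRoot)
        {
            if (!recovered.Add(resourceManagerId))
            {
                return; // already recovered for this RM ID, skip
            }
        }
        recover();
    }
}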

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQNET-417) DTC Recovery should be done once for each application start

2013-05-20 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-417:


Issue Type: New Feature  (was: Bug)

> DTC Recovery should be done once for each application start
> ---
>
> Key: AMQNET-417
> URL: https://issues.apache.org/jira/browse/AMQNET-417
> Project: ActiveMQ .Net
>  Issue Type: New Feature
>  Components: ActiveMQ
>Reporter: Remo Gloor
>Assignee: Jim Gomes
>Priority: Critical
> Attachments: allDTCImprovments.patch, 
> DtcRecoveryShouldNotRunAfterConnectionsAreStarted.patch
>
>
> DTC recovery is currently executed when a new session is created. This is not 
> correct because there can be other sessions that are currently committing a 
> transaction. These transactions must not be recovered, otherwise strange 
> behavior results.
> The recovery should be done just once for each resource manager ID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQNET-422) Added support for transactions for Asyncronous Listeners

2013-05-20 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-422:


Priority: Minor  (was: Major)

> Added support for transactions for Asyncronous Listeners
> 
>
> Key: AMQNET-422
> URL: https://issues.apache.org/jira/browse/AMQNET-422
> Project: ActiveMQ .Net
>  Issue Type: New Feature
>  Components: ActiveMQ
>Reporter: Remo Gloor
>Assignee: Jim Gomes
>Priority: Minor
> Attachments: 
> AddedSupportForAmbientTransactionForAsyncConsumers.patch, 
> allDTCImprovments.patch
>
>
> Asynchronous listeners do not support transactions properly. I suggest adding 
> the option to register a callback that can be used to create a transaction 
> for each message received by the asynchronous listener.
> e.g.
> ((MessageConsumer)consumer).CreateTransactionScopeForAsyncMessage = this.CreateScope;
>
> private TransactionScope CreateScope()
> {
>     return new TransactionScope(TransactionScopeOption.RequiresNew);
> }
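To show how the proposed hook would tie into an asynchronous consumer, here is a minimal sketch. CreateTransactionScopeForAsyncMessage is the property proposed in the attached patch, not part of the released NMS API, and its exact delegate type and completion semantics are assumed from the snippet above.

using System;
using System.Transactions;
using Apache.NMS;
using Apache.NMS.ActiveMQ;
using Apache.NMS.Util;

class AsyncListenerSketch
{
    static void Main()
    {
        var factory = new NetTxConnectionFactory("activemq:tcp://localhost:61616");
        using (INetTxConnection connection = factory.CreateNetTxConnection())
        using (INetTxSession session = connection.CreateNetTxSession())
        using (IMessageConsumer consumer =
            session.CreateConsumer(SessionUtil.GetQueue(session, "queue://foo.bar")))
        {
            // Proposed hook (from the patch): the consumer would create a fresh
            // scope via this callback before dispatching each message.
            ((MessageConsumer)consumer).CreateTransactionScopeForAsyncMessage =
                () => new TransactionScope(TransactionScopeOption.RequiresNew);

            consumer.Listener += message =>
            {
                // Runs inside the ambient transaction created by the callback.
                Console.WriteLine("Received {0}", message.NMSMessageId);
            };

            connection.Start();
            Console.ReadLine();
        }
    }
}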

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQNET-418) Recovery File Logger does not support multiple concurrent transactions

2013-05-20 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-418:


Priority: Minor  (was: Major)

> Recovery File Logger does not support multiple concurrent transactions
> --
>
> Key: AMQNET-418
> URL: https://issues.apache.org/jira/browse/AMQNET-418
> Project: ActiveMQ .Net
>  Issue Type: New Feature
>  Components: ActiveMQ
>Reporter: Remo Gloor
>Assignee: Jim Gomes
>Priority: Minor
> Attachments: allDTCImprovments.patch, 
> RecoveryLoggerDoesNotSupportMultipleTransactions.patch
>
>
> Currently it is not possible to use more than one session if you use DTC 
> transactions. This is because the RecoveryFileLogger cannot handle more than 
> one transaction simultaneously.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQNET-418) Recovery File Logger does not support multiple concurrent transactions

2013-05-20 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-418:


Issue Type: New Feature  (was: Bug)

> Recovery File Logger does not support multiple concurrent transactions
> --
>
> Key: AMQNET-418
> URL: https://issues.apache.org/jira/browse/AMQNET-418
> Project: ActiveMQ .Net
>  Issue Type: New Feature
>  Components: ActiveMQ
>Reporter: Remo Gloor
>Assignee: Jim Gomes
> Attachments: allDTCImprovments.patch, 
> RecoveryLoggerDoesNotSupportMultipleTransactions.patch
>
>
> Currently it is not possible to use more than one session if you use DTC 
> transactions. This is because the RecoveryFileLogger cannot handle more than 
> one transaction simultaneously.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (AMQNET-413) Message producers do not respect DTC Transactions correctly

2013-05-20 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQNET-413.
---

Resolution: Incomplete

This may be fixed, hard to tell.  No definitive NUnit test case. 

> Message producers do not respect DTC Transactions correctly
> ---
>
> Key: AMQNET-413
> URL: https://issues.apache.org/jira/browse/AMQNET-413
> Project: ActiveMQ .Net
>  Issue Type: Bug
>  Components: ActiveMQ
>Reporter: Remo Gloor
>Assignee: Jim Gomes
> Attachments: allDTCImprovments.patch, 
> AllMessagesAreAcknowledgedAndRolledbackIndependentOfTheTransaction.patch, 
> AMQNET-413.patch
>
>
> When consuming messages in a transaction, sending new messages while 
> processing them, and rolling the transaction back and committing it on 
> retry, the number of published messages should equal the number of received ones.
> But the number of sent messages is bigger than the number of received ones. 
> This means some of the message sends are not rolled back while others are.
> EDIT: Further analysis has shown that TransactionContext.TransactionId 
> is null when sending, even though a transaction is in progress and not yet 
> completed. It must be incorrectly assigned to null somewhere.
> The following application demonstrates the problem when enqueuing 100+ 
> messages to foo.bar:
> using System;
> using System.Transactions;
> using Apache.NMS;
> using Apache.NMS.ActiveMQ;
> using Apache.NMS.ActiveMQ.Commands;
> using Apache.NMS.Util;
>
> class Program
> {
>     private static INetTxSession activeMqSession;
>     private static IMessageConsumer consumer;
>     private static INetTxConnection connection;
>
>     static void Main(string[] args)
>     {
>         using (connection = CreateActiveMqConnection())
>         using (activeMqSession = connection.CreateNetTxSession())
>         using (consumer = activeMqSession.CreateConsumer(
>             SessionUtil.GetQueue(activeMqSession, "queue://foo.bar")))
>         {
>             connection.Start();
>             while (true)
>             {
>                 try
>                 {
>                     using (TransactionScope scoped = new TransactionScope(TransactionScopeOption.RequiresNew))
>                     {
>                         IMessage msg = null;
>                         while (msg == null)
>                         {
>                             msg = consumer.ReceiveNoWait();
>                         }
>                         OnMessage(msg);
>                         scoped.Complete();
>                     }
>                 }
>                 catch (Exception exception) { }
>             }
>         }
>     }
>
>     private static INetTxConnection CreateActiveMqConnection()
>     {
>         var connectionFactory = new Apache.NMS.ActiveMQ.NetTxConnectionFactory("activemq:tcp://localhost:61616")
>         {
>             AcknowledgementMode = AcknowledgementMode.Transactional
>         };
>         return connectionFactory.CreateNetTxConnection();
>     }
>
>     private static void OnMessage(IMessage message)
>     {
>         var x = new TestSinglePhaseCommit();
>         Console.WriteLine("Processing message {0} in transaction {1} - {2}",
>             message.NMSMessageId,
>             Transaction.Current.TransactionInformation.LocalIdentifier,
>             Transaction.Current.TransactionInformation.DistributedIdentifier);
>         var session2 = activeMqSession;
>         {
>             Transaction.Current.EnlistDurable(Guid.NewGuid(), x, EnlistmentOptions.None);
>             using (var producer = session2.CreateProducer(SessionUtil.GetQueue(session2, "queue://foo.baz")))
>             {
>                 producer.Send(new ActiveMQTextMessage("foo"));
>             }
>             if (!message.NMSRedelivered) throw new Exception();
>         }
>     }
> }
>
> internal class TestSinglePhaseCommit : ISinglePhaseNotification
> {
>     public void Prepare(PreparingEnlistment preparingEnlistment)
>     {
>         preparingEnlistment.Prepared();
>     }
>
>     public void Commit(Enlistment enlistment)
>     {
>         enlistment.Done();
>     }
>
>     public void Rollback(Enlistment enlistment)
>     {
>         enlistment.Done();
>     }
>
>     public void InDoubt(Enlistment enlistment)
>     {
>         enlistment.Done();
>     }
>
>     public void SinglePhaseCommit(SinglePhaseEnlistment singlePhaseEnlistment)
>     {
>         singlePhaseEnlistment.Committed();
>     }
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (AMQNET-412) Messages are enlisted to the wrong transaction

2013-05-20 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved AMQNET-412.
-

   Resolution: Fixed
Fix Version/s: 1.6.0
 Assignee: Timothy Bish  (was: Jim Gomes)

Fixed on trunk. 

> Messages are enlisted to the wrong transaction
> --
>
> Key: AMQNET-412
> URL: https://issues.apache.org/jira/browse/AMQNET-412
> Project: ActiveMQ .Net
>  Issue Type: Bug
>  Components: ActiveMQ
> Environment: Apache.NMS.ActiveMq 1.5.7
>Reporter: Remo Gloor
>Assignee: Timothy Bish
>Priority: Critical
> Fix For: 1.6.0
>
> Attachments: allDTCImprovments.patch, 
> MessagesAreEnlistedToTheWrongTransaction.patch
>
>
> Under load ActiveMQ enlists a message in a previous transaction. This leads 
> to very strange behavior:
> - The database is updated but the message is rolled back
> - The message is completed but the database is rolled back
> All this results in an invalid system state; DTC is not usable this way.
> Analysis of the source code has shown that the problem is in 
> NetTxSession.DoStartTransaction. There it checks whether a .NET Transaction is 
> already stored in the TransactionContext and, if so, adds the message to that 
> transaction. But this can be the previous transaction, because the DTC 
> two-phase commit is asynchronous. It needs to check whether the current 
> transaction is the same as the one seen before and wait if they do not match 
> (see the sketch after the quoted code below).
> The following application demonstrates the problem when enough messages are 
> processed, e.g. enqueue 100 messages in foo.bar. It is basically 
> TestRedeliveredCase3 but with half of the messages failing.
> Whenever a SinglePhaseCommit occurs in TestSinglePhaseCommit it means 
> the database would be committed in its own transaction.
> using System;
> using System.Transactions;
> using Apache.NMS;
> using Apache.NMS.ActiveMQ;
> using Apache.NMS.ActiveMQ.Commands;
> using Apache.NMS.Util;
>
> class Program
> {
>     private static INetTxSession activeMqSession;
>     private static IMessageConsumer consumer;
>     private static INetTxConnection connection;
>
>     static void Main(string[] args)
>     {
>         using (connection = CreateActiveMqConnection())
>         using (activeMqSession = connection.CreateNetTxSession())
>         using (consumer = activeMqSession.CreateConsumer(
>             SessionUtil.GetQueue(activeMqSession, "queue://foo.bar")))
>         {
>             connection.Start();
>             while (true)
>             {
>                 try
>                 {
>                     using (TransactionScope scoped = new TransactionScope(TransactionScopeOption.RequiresNew))
>                     {
>                         IMessage msg = null;
>                         while (msg == null)
>                         {
>                             msg = consumer.ReceiveNoWait();
>                         }
>                         OnMessage(msg);
>                         scoped.Complete();
>                     }
>                 }
>                 catch (Exception exception) { }
>             }
>         }
>     }
>
>     private static INetTxConnection CreateActiveMqConnection()
>     {
>         var connectionFactory = new Apache.NMS.ActiveMQ.NetTxConnectionFactory("activemq:tcp://localhost:61616")
>         {
>             AcknowledgementMode = AcknowledgementMode.Transactional
>         };
>         return connectionFactory.CreateNetTxConnection();
>     }
>
>     private static void OnMessage(IMessage message)
>     {
>         var x = new TestSinglePhaseCommit();
>         var session2 = activeMqSession;
>         {
>             Transaction.Current.EnlistDurable(Guid.NewGuid(), x, EnlistmentOptions.None);
>             using (var producer = session2.CreateProducer(SessionUtil.GetQueue(session2, "queue://foo.baz")))
>             {
>                 producer.Send(new ActiveMQTextMessage("foo"));
>             }
>             if (new Random().Next(2) == 0) throw new Exception();
>         }
>     }
> }
>
> internal class TestSinglePhaseCommit : ISinglePhaseNotification
> {
>     public void Prepare(PreparingEnlistment preparingEnlistment)
>     {
>         Console.WriteLine("Tx Prepare");
>         preparingEnlistment.Prepared();
>     }
>
>     public void Commit(Enlistment enlistment)
>     {
>         Console.WriteLine("Tx Commit");
>         enlistment.Done();
>     }
>
>     public void Rollback(Enlistment enlistment)
>     {
>         Console.WriteLine("Tx Rollback");
>         enlistment.Done();
>     }
>
>     public void InDoubt(Enlistment enlistment)
>     {
>         Console.WriteLine("Tx InDoubt");
>         enlistment.Done();
>     }
>  
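A minimal sketch of the guard described in the analysis above (waiting for the previously observed transaction before enlisting work in a new one). This is an illustration only, not the actual NetTxSession fix; the class and member names are invented for the example.

using System;
using System.Threading;
using System.Transactions;

class TransactionGuardSketch
{
    private Transaction previousTransaction;
    private readonly ManualResetEventSlim previousCompleted = new ManualResetEventSlim(true);

    public void EnlistWork(IEnlistmentNotification resource)
    {
        Transaction current = Transaction.Current;

        // If a different transaction was seen before, wait until its completion
        // callback has fired before touching shared session state.
        if (previousTransaction != null && !ReferenceEquals(previousTransaction, current))
        {
            previousCompleted.Wait();
        }

        previousCompleted.Reset();
        previousTransaction = current;
        current.TransactionCompleted += (sender, args) => previousCompleted.Set();

        // Enlist the work only once the previous two-phase commit has finished.
        current.EnlistVolatile(resource, EnlistmentOptions.None);
    }
}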

[jira] [Commented] (AMQ-4546) kahadbstore nullpointerexception after restart

2013-05-20 Thread Matt Baker (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662303#comment-13662303
 ] 

Matt Baker commented on AMQ-4546:
-

What more logs can I provide?  I have logging set at debug.  It only happens 
after a couple of restarts, so I can't give a reliable set of steps.  It 
usually takes 2-3 restarts; not always, but it will eventually fail after a 
few restarts.

If I delete the database completely it goes away (obviously).

This is my log4j setting.

log4j.rootLogger=DEBUG, out



These are the lines of code where I set up my broker:

broker = new BrokerService();
StatisticsBrokerPlugin statisticsPlugin = new StatisticsBrokerPlugin();
broker.setPlugins(new BrokerPlugin[] { statisticsPlugin });

String dataDirectory = getDefaultDataDirectory();
broker.setDataDirectory(dataDirectory);
broker.setDeleteAllMessagesOnStartup(true);
File tmpDirectory = new File(getTempDataDirectory());
broker.setTmpDataDirectory(tmpDirectory);
boolean enableJMX = Boolean.getBoolean("enableJMX");
broker.setUseJmx(enableJMX);

try {
    // by default activemq would try to start the jmx rmi server on port
    // 1099, the code below works around it.
    String str = System.getProperty("activemq.jmx.rmi.port");
    if (str != null) {
        int jmxRMIPort = Integer.parseInt(str);
        ManagementContext managementContext = new ManagementContext();
        managementContext.setConnectorPort(jmxRMIPort);
        broker.setManagementContext(managementContext);
    }
} catch (Exception e) {
}

// this is the default... but just make sure, since some messages are
// too big to keep directly in memory
broker.setPersistent(true);

// broker name must be set up based on domain and container name...
// we don't use the value from the activemq.xml file... so we force it here
String brokerName = getBrokerName();
broker.setBrokerName(brokerName);

// advisory support MUST be enabled for stuff to work right
broker.setAdvisorySupport(true);

// bunch of camel stuff in here that I deleted... not sure if you need that.

broker.start();
broker.waitUntilStarted();


> kahadbstore nullpointerexception after restart
> --
>
> Key: AMQ-4546
> URL: https://issues.apache.org/jira/browse/AMQ-4546
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0
>Reporter: Matt Baker
>
> Received a null pointer exception after restarting activemq broker (embedded).
> First few messages are ok, then this happens and the broker (using network 
> connector) starts to fail indicating remote exceptions.
> [//fathom1.win-fiaflosoa0a#43-1] Service DEBUG Error occured while processing 
> sync command: ConsumerInfo {commandId = 4, responseRequired = true, 
> consumerId = ID:WIN-FIAFLOSOA0A-55945-1369075855975-4:22:1:1, 
> destination = queue://fathom1.win-fiaflosoa0a, prefetchSize = 1, 
> maximumPendingMessageLimit = 0, browser = false, dispatchAsync = true, 
> selector = null, subscriptionName = null, noLocal = false, exclusive = false, 
> retroactive = false, priority = 0, brokerPath = null, 
> optimizedAcknowledge = false, noRangeAcks = false, additionalPredicate = null}, 
> exception: java.lang.NullPointerException
> java.lang.NullPointerException
> at 
> org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.getMessageCount(KahaDBStore.java:478)
> at 
> org.apache.activemq.store.ProxyMessageStore.getMessageCount(ProxyMessageStore.java:101)
> at org.apache.activemq.broker.region.Queue.initialize(Queue.java:376)
> at 
> org.apache.activemq.broker.region.DestinationFactoryImpl.createDestination(DestinationFactoryImpl.java:87)
> at 
> org.apache.activemq.broker.region.AbstractRegion.createDestination(AbstractRegion.java:526)
> at 
> org.apache.activemq.broker.region.AbstractRegion.addDestination(AbstractRegion.java:136)
> at 
> org.apache.activemq.broker.region.RegionBroker.addDestination(RegionBroker.java:277)
> at 
> org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145)
> at 
> org.apache.activemq.advisory.AdvisoryBroker.addDestination(AdvisoryBroker.java:17

[jira] [Commented] (AMQ-4487) java.lang.OutOfMemoryError: Java heap space

2013-05-20 Thread Timothy Bish (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662300#comment-13662300
 ] 

Timothy Bish commented on AMQ-4487:
---

Create a unit test that can reproduce the issue if possible. 

> java.lang.OutOfMemoryError: Java heap space
> ---
>
> Key: AMQ-4487
> URL: https://issues.apache.org/jira/browse/AMQ-4487
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0
> Environment: OS - Linux 2.6.18-238.el5 #1 SMP Sun Dec 19 14:22:44 EST 
> 2010 x86_64 x86_64 x86_64 GNU/Linux
> Activemq - 5.8
>Reporter: Subathra Jayaraman
>
> Hi,
> When we browse a queue in the web console we are getting 
> java.lang.OutOfMemoryError: Java heap space.
> Memory allocation -> -Xms512m -Xmx3G
> When we try to click the queue to view the messages, the error below occurs. 
> We recently moved from 5.7 to 5.8. We didn't face this issue in the 5.7 
> version.
> Kindly help in fixing the issue.
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2882)
> at java.io.CharArrayWriter.write(CharArrayWriter.java:88)
> at java.io.PrintWriter.write(PrintWriter.java:382)
> at 
> com.opensymphony.module.sitemesh.filter.RoutablePrintWriter.write(RoutablePrintWriter.java:144)
> at 
> org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:181)
> at 
> org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:449)
> at 
> org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:462)
> at 
> org.apache.jsp.browse_jsp$browse_jspHelper.invoke0(org.apache.jsp.browse_jsp:382)
> at 
> org.apache.jsp.browse_jsp$browse_jspHelper.invoke(org.apache.jsp.browse_jsp:450)
> at 
> org.apache.jsp.tag.web.jms.forEachMessage_tag.doTag(org.apache.jsp.tag.web.jms.forEachMessage_tag:89)
> at 
> org.apache.jsp.browse_jsp._jspx_meth_jms_forEachMessage_0(org.apache.jsp.browse_jsp:170)
> at 
> org.apache.jsp.browse_jsp._jspService(org.apache.jsp.browse_jsp:100)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:109)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
> at 
> org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:389)
> at 
> org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:486)
> at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:380)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:652)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1329)
> at 
> org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:83)
> at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1300)
> at 
> org.apache.activemq.web.SessionFilter.doFilter(SessionFilter.java:45)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1300)
> at 
> org.apache.activemq.web.filter.ApplicationContextFilter.doFilter(ApplicationContextFilter.java:102)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1300)
> at 
> com.opensymphony.sitemesh.webapp.SiteMeshFilter.obtainContent(SiteMeshFilter.java:129)
> at 
> com.opensymphony.sitemesh.webapp.SiteMeshFilter.doFilter(SiteMeshFilter.java:77)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1300)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:445)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> Thank you.
> Regards,
> Subathra.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4487) java.lang.OutOfMemoryError: Java heap space

2013-05-20 Thread Ilia Stepanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662298#comment-13662298
 ] 

Ilia Stepanov commented on AMQ-4487:


I have the same issue with 5.8.0. I checked the heap dump and debugged the web 
console; I hope this will help to fix it. Please provide an update.

In the heap dump I found an instance of VMPendingMessageCursor holding 18+ 
million PendingNode elements (the queue actually held just two hundred 
messages).

While debugging I found that the method Queue.iterate() is repeated in an 
endless loop.

In the first run it adds 200 messages to the browser. The second run should 
normally add no new messages and remove the browserDispatch from the 
browserDispatches. However, this does not happen: the check
if (!node.isAcked() && !browser.getPending().getMessageAudit().isDuplicate(node.getMessageId()))
returns true again and the messages are added again.

The third run adds the messages again, and so on. Messages are added until the 
OOM occurs.
I found it strange that ActiveMQMessageAuditNoSync.isDuplicate() returns 
false in the second iteration, so I checked it.

public boolean isDuplicate(final MessageId id) {
    boolean answer = false;

    if (id != null) {
        ProducerId pid = id.getProducerId();
        if (pid != null) {
            // here bab is null in the second iteration. why? it should have
            // been added in the first iteration
            BitArrayBin bab = map.get(pid);
            if (bab == null) {
                bab = new BitArrayBin(auditDepth);
                // here a new entry is added to the map, but the size of
                // keySet() is NOT increased!
                map.put(pid, bab);
                // here map.get(pid) returns a correct value in the debugger.
                // However, in the next iteration it returns null again...
                modified = true;
            }
            answer = bab.setBit(id.getProducerSequenceId(), true);
        }
    }
    return answer;
}

It looks like a collision in the map. Does ProducerId come with proper 
hashCode() and equals() methods?
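The behavior described above is what any hash map does when its key type lacks value-based equality. The original question concerns the Java ProducerId class; the following is a small, self-contained illustration of the same mechanism, written in C# (the language of the other examples in this digest) with invented type names.

using System;
using System.Collections.Generic;

class KeyWithoutEquality
{
    public string ConnectionId;
    public long SessionId;
    // No Equals/GetHashCode override: lookups fall back to reference equality.
}

class Program
{
    static void Main()
    {
        var map = new Dictionary<KeyWithoutEquality, string>();

        var first = new KeyWithoutEquality { ConnectionId = "ID:host-1", SessionId = 1 };
        map[first] = "bit array for this producer";

        // A new instance with identical field values...
        var second = new KeyWithoutEquality { ConnectionId = "ID:host-1", SessionId = 1 };

        // ...is not found, just like map.get(pid) returning null above.
        Console.WriteLine(map.ContainsKey(first));   // True
        Console.WriteLine(map.ContainsKey(second));  // False
    }
}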



> java.lang.OutOfMemoryError: Java heap space
> ---
>
> Key: AMQ-4487
> URL: https://issues.apache.org/jira/browse/AMQ-4487
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0
> Environment: OS - Linux 2.6.18-238.el5 #1 SMP Sun Dec 19 14:22:44 EST 
> 2010 x86_64 x86_64 x86_64 GNU/Linux
> Activemq - 5.8
>Reporter: Subathra Jayaraman
>
> Hi,
> When we browse a queue in the web console we are getting 
> java.lang.OutOfMemoryError: Java heap space.
> Memory allocation -> -Xms512m -Xmx3G
> When we try to click the queue to view the messages, the error below occurs. 
> We recently moved from 5.7 to 5.8. We didn't face this issue in the 5.7 
> version.
> Kindly help in fixing the issue.
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2882)
> at java.io.CharArrayWriter.write(CharArrayWriter.java:88)
> at java.io.PrintWriter.write(PrintWriter.java:382)
> at 
> com.opensymphony.module.sitemesh.filter.RoutablePrintWriter.write(RoutablePrintWriter.java:144)
> at 
> org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:181)
> at 
> org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:449)
> at 
> org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:462)
> at 
> org.apache.jsp.browse_jsp$browse_jspHelper.invoke0(org.apache.jsp.browse_jsp:382)
> at 
> org.apache.jsp.browse_jsp$browse_jspHelper.invoke(org.apache.jsp.browse_jsp:450)
> at 
> org.apache.jsp.tag.web.jms.forEachMessage_tag.doTag(org.apache.jsp.tag.web.jms.forEachMessage_tag:89)
> at 
> org.apache.jsp.browse_jsp._jspx_meth_jms_forEachMessage_0(org.apache.jsp.browse_jsp:170)
> at 
> org.apache.jsp.browse_jsp._jspService(org.apache.jsp.browse_jsp:100)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:109)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
> at 
> org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:389)
> at 
> org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:486)
> at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:380)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:652)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1329)
> at 
> org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.ja

[jira] [Commented] (AMQ-4546) kahadbstore nullpointerexception after restart

2013-05-20 Thread Christian Posta (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662292#comment-13662292
 ] 

Christian Posta commented on AMQ-4546:
--

Can you create a unit test to reproduce this? Or at the very least give broker 
configs, more detailed logs, and steps to reliably reproduce this? Looks like 
the index pageFile is null, but no way to know how that happened. 

> kahadbstore nullpointerexception after restart
> --
>
> Key: AMQ-4546
> URL: https://issues.apache.org/jira/browse/AMQ-4546
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0
>Reporter: Matt Baker
>
> Received a null pointer exception after restarting activemq broker (embedded).
> First few messages are ok, then this happens and the broker (using network 
> connector) starts to fail indicating remote exceptions.
> [//fathom1.win-fiaflosoa0a#43-1] Service DEBUG Error occured while processing 
> sync command: ConsumerInfo {commandId = 4, responseRequired = true, 
> consumerId = ID:WIN-FIAFLOSOA0A-55945-1369075855975-4:22:1:1, 
> destination = queue://fathom1.win-fiaflosoa0a, prefetchSize = 1, 
> maximumPendingMessageLimit = 0, browser = false, dispatchAsync = true, 
> selector = null, subscriptionName = null, noLocal = false, exclusive = false, 
> retroactive = false, priority = 0, brokerPath = null, 
> optimizedAcknowledge = false, noRangeAcks = false, additionalPredicate = null}, 
> exception: java.lang.NullPointerException
> java.lang.NullPointerException
> at 
> org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.getMessageCount(KahaDBStore.java:478)
> at 
> org.apache.activemq.store.ProxyMessageStore.getMessageCount(ProxyMessageStore.java:101)
> at org.apache.activemq.broker.region.Queue.initialize(Queue.java:376)
> at 
> org.apache.activemq.broker.region.DestinationFactoryImpl.createDestination(DestinationFactoryImpl.java:87)
> at 
> org.apache.activemq.broker.region.AbstractRegion.createDestination(AbstractRegion.java:526)
> at 
> org.apache.activemq.broker.region.AbstractRegion.addDestination(AbstractRegion.java:136)
> at 
> org.apache.activemq.broker.region.RegionBroker.addDestination(RegionBroker.java:277)
> at 
> org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145)
> at 
> org.apache.activemq.advisory.AdvisoryBroker.addDestination(AdvisoryBroker.java:174)
> at 
> org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145)
> at 
> org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145)
> at 
> org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145)
> at 
> org.apache.activemq.broker.MutableBrokerFilter.addDestination(MutableBrokerFilter.java:151)
> at 
> org.apache.activemq.broker.region.AbstractRegion.lookup(AbstractRegion.java:452)
> at 
> org.apache.activemq.broker.region.AbstractRegion.addConsumer(AbstractRegion.java:265)
> at 
> org.apache.activemq.broker.region.RegionBroker.addConsumer(RegionBroker.java:353)
> at 
> org.apache.activemq.broker.BrokerFilter.addConsumer(BrokerFilter.java:89)
> at 
> org.apache.activemq.advisory.AdvisoryBroker.addConsumer(AdvisoryBroker.java:91)
> at 
> org.apache.activemq.broker.BrokerFilter.addConsumer(BrokerFilter.java:89)
> at 
> org.apache.activemq.broker.BrokerFilter.addConsumer(BrokerFilter.java:89)
> at 
> org.apache.activemq.broker.BrokerFilter.addConsumer(BrokerFilter.java:89)
> at 
> org.apache.activemq.broker.MutableBrokerFilter.addConsumer(MutableBrokerFilter.java:95)
> at 
> org.apache.activemq.broker.TransportConnection.processAddConsumer(TransportConnection.java:619)
> at 
> org.apache.activemq.command.ConsumerInfo.visit(ConsumerInfo.java:332)
> at 
> org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:329)
> at 
> org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:184)
> at 
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> at 
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> at 
> org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:241)
> at 
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
> at 
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.jav

[jira] [Created] (AMQ-4546) kahadbstore nullpointerexception after restart

2013-05-20 Thread Matt Baker (JIRA)
Matt Baker created AMQ-4546:
---

 Summary: kahadbstore nullpointerexception after restart
 Key: AMQ-4546
 URL: https://issues.apache.org/jira/browse/AMQ-4546
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.8.0
Reporter: Matt Baker


Received a null pointer exception after restarting activemq broker (embedded).

First few messages are ok, then this happens and the broker (using network 
connector) starts to fail indicating remote exceptions.

[//fathom1.win-fiaflosoa0a#43-1] Service DEBUG Error occured while processing 
sync command: ConsumerInfo {commandId = 4, responseRequired = true, 
consumerId = ID:WIN-FIAFLOSOA0A-55945-1369075855975-4:22:1:1, 
destination = queue://fathom1.win-fiaflosoa0a, prefetchSize = 1, 
maximumPendingMessageLimit = 0, browser = false, dispatchAsync = true, 
selector = null, subscriptionName = null, noLocal = false, exclusive = false, 
retroactive = false, priority = 0, brokerPath = null, 
optimizedAcknowledge = false, noRangeAcks = false, additionalPredicate = null}, 
exception: java.lang.NullPointerException
java.lang.NullPointerException
at 
org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.getMessageCount(KahaDBStore.java:478)
at 
org.apache.activemq.store.ProxyMessageStore.getMessageCount(ProxyMessageStore.java:101)
at org.apache.activemq.broker.region.Queue.initialize(Queue.java:376)
at 
org.apache.activemq.broker.region.DestinationFactoryImpl.createDestination(DestinationFactoryImpl.java:87)
at 
org.apache.activemq.broker.region.AbstractRegion.createDestination(AbstractRegion.java:526)
at 
org.apache.activemq.broker.region.AbstractRegion.addDestination(AbstractRegion.java:136)
at 
org.apache.activemq.broker.region.RegionBroker.addDestination(RegionBroker.java:277)
at 
org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145)
at 
org.apache.activemq.advisory.AdvisoryBroker.addDestination(AdvisoryBroker.java:174)
at 
org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145)
at 
org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145)
at 
org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145)
at 
org.apache.activemq.broker.MutableBrokerFilter.addDestination(MutableBrokerFilter.java:151)
at 
org.apache.activemq.broker.region.AbstractRegion.lookup(AbstractRegion.java:452)
at 
org.apache.activemq.broker.region.AbstractRegion.addConsumer(AbstractRegion.java:265)
at 
org.apache.activemq.broker.region.RegionBroker.addConsumer(RegionBroker.java:353)
at 
org.apache.activemq.broker.BrokerFilter.addConsumer(BrokerFilter.java:89)
at 
org.apache.activemq.advisory.AdvisoryBroker.addConsumer(AdvisoryBroker.java:91)
at 
org.apache.activemq.broker.BrokerFilter.addConsumer(BrokerFilter.java:89)
at 
org.apache.activemq.broker.BrokerFilter.addConsumer(BrokerFilter.java:89)
at 
org.apache.activemq.broker.BrokerFilter.addConsumer(BrokerFilter.java:89)
at 
org.apache.activemq.broker.MutableBrokerFilter.addConsumer(MutableBrokerFilter.java:95)
at 
org.apache.activemq.broker.TransportConnection.processAddConsumer(TransportConnection.java:619)
at org.apache.activemq.command.ConsumerInfo.visit(ConsumerInfo.java:332)
at 
org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:329)
at 
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:184)
at 
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
at 
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
at 
org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:241)
at 
org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
at 
org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
[ Thread-12] DefaultComponent    DEBUG Creating endpoint 
uri=[jms://topic:progress.openedge.management.notification.fathom1.win-fiaflosoa0a], 
path=[topic:progress.openedge.management.notification.fathom1.win-fiaflosoa0a], 
parameters=[{}]
[ Thread-12] DefaultCamelContext DEBUG 
jms://topic:progress.openedge.management.notification.fathom1.win-fiaflosoa0a 
converted to endpoint: 
Endpoint[jms://topic:progress.openedge.management.notification.fathom1.win-fiaflosoa0a] 
by component: org.apache.activemq.cam

[jira] [Commented] (AMQ-4338) MQTTSSLTest has multiple test cases that fail frequently

2013-05-20 Thread Timothy Bish (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662198#comment-13662198
 ] 

Timothy Bish commented on AMQ-4338:
---

This one looks like it's fixed now with the latest MQTT client pulled into the 
build.

> MQTTSSLTest has multiple test cases that fail frequently
> 
>
> Key: AMQ-4338
> URL: https://issues.apache.org/jira/browse/AMQ-4338
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Test Cases
>Reporter: Kevin Earls
>Priority: Minor
> Attachments: AMQ-4338A.patch, AMQ-4338.patch
>
>
> MQTTSSLTest has multiple different test cases (including 
> testSendAndReceiveExactlyOnce, testSendAndReceiveLargeMessages, 
> testSendAndReceiveMQTT, testSendAtLeastOnceReceiveAtMostOnce, 
> testSendAtLeastOnceReceiveExactlyOnce, testSendJMSReceiveMQTT, 
> testSendMQTTReceiveJMS) which fail fairly frequently because of a hang on the 
> provider.connect() call in initializeConnection() as shown in the stacktrace 
> below. 
> Another problem with this test is it was giving a misleading error when run 
> under Hudson, showing that the test that followed it (MQTTTest) was failing 
> instead.  I think this was because of the way it was using 
> AutoFailTestSupport.  I will attach a patch which removes that and uses 
> timeouts on @Test annotations instead.
> testSendAndReceiveLargeMessages(org.apache.activemq.transport.mqtt.MQTTSSLTest)
>   Time elapsed: 30.004 sec  <<< ERROR!
> java.lang.Exception: test timed out after 3 milliseconds
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:236)
> at org.fusesource.mqtt.client.Promise.await(Promise.java:88)
> at 
> org.fusesource.mqtt.client.BlockingConnection.connect(BlockingConnection.java:49)
> at 
> org.apache.activemq.transport.mqtt.FuseMQQTTClientProvider.connect(FuseMQQTTClientProvider.java:39)
> at 
> org.apache.activemq.transport.mqtt.MQTTSSLTest.initializeConnection(MQTTSSLTest.java:60)
> Results :
> Tests in error:
>   
> MQTTSSLTest>AbstractMQTTTest.testSendAndReceiveLargeMessages:247->initializeConnection:60
>  »

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4480) mkahadb with perDestination="true" lazily loads kahadb journal files after startup

2013-05-20 Thread Timothy Bish (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662156#comment-13662156
 ] 

Timothy Bish commented on AMQ-4480:
---

Torsten, aren't these based on the normal Windows max length values?  If long 
names are used, the user can increase the max length values and things should 
work as expected.

> mkahadb with perDestination="true" lazily loads kahadb journal files after 
> startup
> --
>
> Key: AMQ-4480
> URL: https://issues.apache.org/jira/browse/AMQ-4480
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.7.0, 5.8.0
>Reporter: Torsten Mielke
>
> Using the following mKahaDB config:
> {code:xml}
> 
>   
> 
>   
>   
> 
>   
>   
> 
>   
> 
> {code}
> Note perDestination="true". 
> Using that configuration and sending a message to a JMS queue whose name is 
> longer than 50 characters, the destination's messages won't be loaded eagerly 
> upon a restart of the broker. As a result that destination does not show up 
> in JMX. 
> Only when a producer or consumer connects to the destination does it get 
> loaded from KahaDB, as this broker log output confirms:
> {noformat}
> INFO | KahaDB is version 4
> INFO | Recovering from the journal ...
> INFO | Recovery replayed 1 operations from the journal in 0.0010 seconds.
> {noformat}
> This log output is written after the broker had completely started up. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-3906) repeated error message regarding chunk stream logged

2013-05-20 Thread Matthias Ronge (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662157#comment-13662157
 ] 

Matthias Ronge commented on AMQ-3906:
-

“Hello, I am out of office until May the 27th, 2013. During that time I will 
have no access to my e-mail account. Your email will not be forwarded. In 
urgent cases send an e-mail to f...@zeutschel.de, i...@zeutschel.de or please 
call +49 (7071) 97060 and you will be passed on to a competent person in our 
company. With best regards, Matthias Ronge, Zeutschel GmbH”

“Hello, I am out of the office until 27 May 2013. Your e-mail will not be 
forwarded. In urgent cases please send an e-mail to f...@zeutschel.de or 
i...@zeutschel.de. You can also reach us at +49 (7071) 97060 and will be 
connected to a competent contact person. With best regards, Matthias Ronge, 
Zeutschel GmbH”



> repeated error message regarding chunk stream logged
> 
>
> Key: AMQ-3906
> URL: https://issues.apache.org/jira/browse/AMQ-3906
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.6.0
> Environment: ActiveMQ 5.6.0 running on Linux FC10 x86
>Reporter: Rajib Rashid
> Attachments: activemq.xml, kahadb.zip
>
>
> after running normally for ~24 hours, warning messages/errors like below are 
> logged every 30 seconds:
> {code}
> 2012-06-27 14:33:31,532 org.apache.activemq.broker.region.Topic[ActiveMQ 
> Broker[ZyrionMessageBus] Scheduler]: (WARN) Failed to browse Topic: 
> remoteUpdateP2PTopic
> java.io.EOFException: Chunk stream does not exist, page: 50 is marked free
> at org.apache.kahadb.page.Transaction$2.readPage(Transaction.java:460)
> at org.apache.kahadb.page.Transaction$2.(Transaction.java:437)
> at 
> org.apache.kahadb.page.Transaction.openInputStream(Transaction.java:434)
> at org.apache.kahadb.page.Transaction.load(Transaction.java:410)
> at org.apache.kahadb.page.Transaction.load(Transaction.java:367)
> at org.apache.kahadb.index.BTreeIndex.loadNode(BTreeIndex.java:262)
> at org.apache.kahadb.index.BTreeIndex.getRoot(BTreeIndex.java:174)
> at org.apache.kahadb.index.BTreeIndex.iterator(BTreeIndex.java:232)
> at 
> org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex$MessageOrderIterator.(MessageDatabase.java:2714)
> at 
> org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex.iterator(MessageDatabase.java:2696)
> at 
> org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$3.execute(KahaDBStore.java:525)
> at org.apache.kahadb.page.Transaction.execute(Transaction.java:769)
> at 
> org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recover(KahaDBStore.java:521)
> at 
> org.apache.activemq.store.ProxyTopicMessageStore.recover(ProxyTopicMessageStore.java:62)
> at org.apache.activemq.broker.region.Topic.doBrowse(Topic.java:559)
> at org.apache.activemq.broker.region.Topic.access$100(Topic.java:62)
> at org.apache.activemq.broker.region.Topic$6.run(Topic.java:684)
> at 
> org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)
> at java.util.TimerThread.mainLoop(Timer.java:512)
> at java.util.TimerThread.run(Timer.java:462)
> {code}
> since then the warning has been logged 6000+ times. not sure if this is due 
> to the fact that we have enabled expiration of queued messages for offline 
> subscribers.
> {code}
> % ls -l apps/activemq/data/kahadb/
> total 32068
> -rw-r--r-- 1 root root 33030144 2012-06-29 14:44 db-14.log
> -rw-r--r-- 1 root root   339968 2012-06-29 14:44 db.data
> -rw-r--r-- 1 root root   196984 2012-06-29 14:44 db.redo
> -rw-r--r-- 1 root root0 2012-06-26 16:40 lock
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (AMQ-3906) repeated error message regarding chunk stream logged

2013-05-20 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-3906.
-

Resolution: Incomplete

Closing as there has been no feedback from the reporter in a couple of months.  
The issue appears to have been resolved for them.

> repeated error message regarding chunk stream logged
> 
>
> Key: AMQ-3906
> URL: https://issues.apache.org/jira/browse/AMQ-3906
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.6.0
> Environment: ActiveMQ 5.6.0 running on Linux FC10 x86
>Reporter: Rajib Rashid
> Attachments: activemq.xml, kahadb.zip
>
>
> after running normally for ~24 hours, warning messages/errors like below are 
> logged every 30 seconds:
> {code}
> 2012-06-27 14:33:31,532 org.apache.activemq.broker.region.Topic[ActiveMQ 
> Broker[ZyrionMessageBus] Scheduler]: (WARN) Failed to browse Topic: 
> remoteUpdateP2PTopic
> java.io.EOFException: Chunk stream does not exist, page: 50 is marked free
> at org.apache.kahadb.page.Transaction$2.readPage(Transaction.java:460)
> at org.apache.kahadb.page.Transaction$2.(Transaction.java:437)
> at 
> org.apache.kahadb.page.Transaction.openInputStream(Transaction.java:434)
> at org.apache.kahadb.page.Transaction.load(Transaction.java:410)
> at org.apache.kahadb.page.Transaction.load(Transaction.java:367)
> at org.apache.kahadb.index.BTreeIndex.loadNode(BTreeIndex.java:262)
> at org.apache.kahadb.index.BTreeIndex.getRoot(BTreeIndex.java:174)
> at org.apache.kahadb.index.BTreeIndex.iterator(BTreeIndex.java:232)
> at 
> org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex$MessageOrderIterator.(MessageDatabase.java:2714)
> at 
> org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex.iterator(MessageDatabase.java:2696)
> at 
> org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$3.execute(KahaDBStore.java:525)
> at org.apache.kahadb.page.Transaction.execute(Transaction.java:769)
> at 
> org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recover(KahaDBStore.java:521)
> at 
> org.apache.activemq.store.ProxyTopicMessageStore.recover(ProxyTopicMessageStore.java:62)
> at org.apache.activemq.broker.region.Topic.doBrowse(Topic.java:559)
> at org.apache.activemq.broker.region.Topic.access$100(Topic.java:62)
> at org.apache.activemq.broker.region.Topic$6.run(Topic.java:684)
> at 
> org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)
> at java.util.TimerThread.mainLoop(Timer.java:512)
> at java.util.TimerThread.run(Timer.java:462)
> {code}
> since then the warning has been logged 6000+ times. not sure if this is due 
> to the fact that we have enabled expiration of queued messages for offline 
> subscribers.
> {code}
> % ls -l apps/activemq/data/kahadb/
> total 32068
> -rw-r--r-- 1 root root 33030144 2012-06-29 14:44 db-14.log
> -rw-r--r-- 1 root root   339968 2012-06-29 14:44 db.data
> -rw-r--r-- 1 root root   196984 2012-06-29 14:44 db.redo
> -rw-r--r-- 1 root root0 2012-06-26 16:40 lock
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4545) ActiveMQ Ajax API does not provide support for setting JMS properties.

2013-05-20 Thread Christian Posta (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662104#comment-13662104
 ] 

Christian Posta commented on AMQ-4545:
--

We can take a look when time permits, but maybe you can submit a patch for this?

> ActiveMQ Ajax API does not provide support for setting JMS properties.
> --
>
> Key: AMQ-4545
> URL: https://issues.apache.org/jira/browse/AMQ-4545
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.8.0
>Reporter: Bhanu
>
> The ActiveMQ Ajax API, i.e. amq.js, does not have any support for setting 
> message properties. The sendMessage() call accepts only two parameters: 
> destination and message. Can this be enhanced to support sending message 
> properties like JMSReplyTo, JMSCorrelationID, etc.?
> Thanks,
> Bhanu

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: does master sync to disk on successful replication?

2013-05-20 Thread Hiram Chirino
quorum_mem just implies it will use remote_mem and local_mem.
local_mem does not imply remote_mem.

Just committed a change so that if you only configure 'local_mem' or
'local_disk', then we won't sync against remote machines at all.
Handy if you just want async replication to a slave (perhaps it is in
a different datacenter and the WAN has high latency).



On Sun, May 19, 2013 at 12:31 AM, Christian Posta
 wrote:
> Hiram,
>
> Looks pretty flexible.
>
> So what's the main distinction between configuring for "remote_mem" vs
> "quorum_mem"?
> And does "local_mem" imply "remote_mem"?
>
>
> On Fri, May 17, 2013 at 7:33 AM, Hiram Chirino wrote:
>
>> Hi Christian,
>>
>> Ok. I've implemented being able to control how the syncs are done.
>> Take a peek at the doco for the sync property at:
>>
>> https://cwiki.apache.org/confluence/display/ACTIVEMQ/Replicated+LevelDB+Store#ReplicatedLevelDBStore-ReplicatedLevelDBStoreProperties
>>
>> Let me know what you think.
>>
>> On Thu, May 9, 2013 at 10:05 AM, Christian Posta
>>  wrote:
>> > All,
>> > chatted with Hiram about how syncs on replicated leveldb works... didn't
>> > mean for it to be private :) I'm forwarding the email thread...
>> >
>> > See the discussion below and add any comments/thoughts as desired..
>> >
>> > Thanks,
>> > Christian
>> >
>> > -- Forwarded message --
>> > From: Hiram Chirino 
>> > Date: Thu, May 9, 2013 at 6:31 AM
>> > Subject: Re: does master sync to disk on successful replication?
>> > To: Christian Posta 
>> >
>> >
>> > Yeah, think you're right... might be better off with something like
>> > syncTo="<options>":
>> > where <options> can be a space-separated list of:
>> >  * disk - sync to the local disk
>> >  * replica - sync to the remote replica's memory
>> >  * replicaDisk - sync to the remote replica's disk
>> >
>> > And we just default that to replica.
>> >
>> > On Thu, May 9, 2013 at 9:16 AM, Christian Posta
>> >  wrote:
>> >> But I think we need sync to be true for the replication as it stands
>> >> right now? If the sync option is true then we hit this line in the
>> >> client's store method, which is the hook into the replication:
>> >>
>> >> if( syncNeeded && sync ) {
>> >>   appender.force
>> >> }
>> >>
>> >> If we change it to false, then replication won't be kicked off. We could
>> >> remove the && sync, but then persistent messages would be sync'd even if
>> >> sync==false... prob don't want that.
>> >>
>> >> *might* need another setting "forceReplicationSyncToDisk" or something...
>> >> or... move the replication out of the appender.force method... in activemq
>> >> 5.x you have the following in DataFileAppender which delegates to a
>> >> replicator:
>> >>
>> >> ReplicationTarget replicationTarget = journal.getReplicationTarget();
>> >> if( replicationTarget != null ) {
>> >>     replicationTarget.replicate(wb.writes.getHead().location, sequence, forceToDisk);
>> >> }
>> >>
>> >>
>> >> On Thu, May 9, 2013 at 6:02 AM, Hiram Chirino 
>> wrote:
>> >>>
>> >>> Yeah... perhaps we keep using the sync config option, just change the
>> >>> default to false in the replicated scenario.
>> >>>
>> >>> Very hard to verify proper operation of fsync.
>> >>>
>> >>> Best way I've found is by comparing performance of writes followed by
>> >>> fsync and writes not followed by fsync.  Then looking at the numbers,
>> >>> comparing them to the hardware being used, and seeing if it makes sense.
>> >>> On a spinning disk without a battery-backed write cache, you should not
>> >>> get more than 100-300 writes per second with fsync.  But once you start
>> >>> looking at SSDs or battery-backed write cache hardware, that assumption
>> >>> goes out the window.
>> >>>
>> >>>
>> >>> On Thu, May 9, 2013 at 8:48 AM, Christian Posta
>> >>>  wrote:
>> >>> > Your thoughts above make sense. Maybe we can add the option and leave
>> >>> > it disabled for now?
>> >>> > I can write a test for it and do it. As fsync vs fflush are quite OS
>> >>> > dependent, do you know of a good way to write tests to verify fsync?
>> >>> > Just read the contents from the file?
>> >>> >
>> >>> >
>> >>> > On Wed, May 8, 2013 at 7:02 PM, Hiram Chirino 
>> > wrote:
>> >>> >>
>> >>> >> Nope, you're not missing anything.  Instead of disk syncing, we are
>> >>> >> doing replica syncing.  If the master dies and loses some of its
>> >>> >> recent log entries, it's not a big deal since we can recover from the
>> >>> >> log file of the slave.
>> >>> >>
>> >>> >> The only time you could possibly lose data is in the small likelihood
>> >>> >> that the master and the slave machines die at the same time.  But if
>> >>> >> that is likely to happen you really don't have a very HA deployment.
>> >>> >>
>> >>> >> But if folks do think that's a possibility, then perhaps we should add
>> >>> >> an option to really disk sync.
>> >>> >>
>> >>> >> On Wed, May 8, 2013 at 6:06 PM, Christian Posta
>> >>> >>